Response Design Corporation: Creating the Uncommon Call Center
Kathryn's Uncommon Call Center Blog
June 23, 2008 02:01 PM
Kathryn
Business Analytics - Paralysis by Analysis or More?

As recently as a few years ago, I heard executives tout the great numbers being presented after data mining. The result of the mining was the end. Now, however, managers are looking at the analytical result as the beginning. Business users use the analytics to answer two critical questions: how do we turn the information into action, and what effect will each action have? It is not enough to simply know the results of the analytical process. Data mining used to stop at discovering an interesting group of customers. Now we need to move to action and measurement by saying, “Here is a group of people who should be presented with product X when they call inquiring about product Y. By following this course of action, we project a 20% lift in call center revenues over the next three months.”

Our industry is excited about business analytics (i.e., the gathering and analyzing of business data in order to make better-informed business decisions). However, in most companies there is a division of labor between the analytics (business) user and the analyst. Although the business user is an expert in his or her area, it is unlikely that he or she is also an expert in data analysis and statistics. The needs of both players must be aligned, and this is not an easy job.

For example, Kohavi, Rothleder, and Simoudis (2002) highlight the following challenges most organizations face as they try to create relevant, accurate, and timely analytics.

1. The time to crunch the numbers and analyze the data is never fast enough. But then, will it ever be? Will we just keep demanding faster and faster answers? When should we redefine “real time” as “right time”?

2. Business users want to be a little more self-sufficient. They want user-friendly interfaces that allow them to rely less on other people to get answers when they want them.

3. Data collection and analysis isn’t targeted. We want it all whenever we want it. We don’t take the time to define clear business goals and metrics. In the past, unrealistic expectations about data mining “magic” led to misguided efforts without clear goals and metrics.

4. We want to analyze data that must be integrated from multiple sources. Most of the time, we don’t have an efficient and cost-effective way to do this. The extract-transform-load (ETL) process is usually complex, and when it is considered, the cost and difficulty are usually underestimated (a minimal sketch of the three steps follows this list).
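
To make the integration challenge concrete, here is a minimal extract-transform-load sketch in Python. The file names and field names are hypothetical, and a real pipeline would add validation, error handling, and scheduling; this only illustrates the three steps.

    import csv

    def extract(path):
        # Extract: read raw rows from one source system (here, a CSV export).
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def transform(crm_rows, switch_rows):
        # Transform: join the two sources on a shared customer id and
        # convert the switch's handle time from seconds to minutes.
        switch_by_id = {r["customer_id"]: r for r in switch_rows}
        combined = []
        for row in crm_rows:
            match = switch_by_id.get(row["customer_id"])
            if match is None:
                continue  # unmatched records would need their own handling rule
            combined.append({
                "customer_id": row["customer_id"],
                "segment": row["segment"],
                "handle_minutes": float(match["handle_seconds"]) / 60,
            })
        return combined

    def load(rows, path):
        # Load: write the integrated records to the reporting store.
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["customer_id", "segment", "handle_minutes"])
            writer.writeheader()
            writer.writerows(rows)

    load(transform(extract("crm_export.csv"), extract("switch_export.csv")), "analytics_feed.csv")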

Almost every company I work with is creating some type of dashboard and/or scorecard. Both are delivered through some type of business analytics. It is a grueling process, and I run up against each of the challenges mentioned above. I find that the biggest hurdle is getting the business user to define the business goals and metrics. When most people go down this path they believe, “If I can measure it, it must be important.” Many people become fascinated with “quantity” rather than quality. The “fun” side of seeing all these correlations and neat analyses paralyzes them. Rather than thinking about what action they will take and what effect that action will have, they enter the common analytic stupor of “paralysis by analysis.”

Kohavi, R., Rothleder, N. J., & Simoudis, E. (2002). Emerging trends in business analytics. Communications of the ACM, 45(8), 45-48. Retrieved June 22, 2008, from http://ai.stanford.edu/~ronnyk/cacmEmergingTrendsInBI.pdf

May 16, 2008 12:51 PM
Kathryn
Free Call Center Metrics Survey

Participants Wanted

We have updated our free call center metrics survey. When we get enough data in the database, we will produce a report for all contributors. You can download the survey from the site before taking it. For those of you ready to take the metrics survey, here is the link, now updated and ready to go.

http://www.responsedesign.com/metric_survey

Enjoy!

November 13, 2006 12:37 AM
Kathryn
Determining what to measure

Companies need a balance between what they are trying to achieve and what they are measuring. Before you determine what to measure, have a clear understanding of:

What your strategy is for competing in the marketplace.
Departmental measures should be linked to corporate strategy. Everyone should be heading toward the same goal. Often companies find that one department may be establishing metrics that compete with another department's measures. Needless to say, this often works against customer satisfaction. To help departments stay focused, ensure that the corporate strategy is well defined and clearly communicated. Ask each department to keep its measures related to the corporate strategies and measures.

What your customers want.
Companies fail when they develop a measurement system without asking customers what it takes to keep them. Your list of prospective projects could be endless. Your customer can tell you which things are the most significant. Make sure the metrics you choose are based on input gleaned from talking with (and surveying) your valued customers.

Keeping these goals in mind, choose metrics that are recurring, stable over time, and reflective of management-level measures, such as those a balanced scorecard would depict.

November 6, 2006 12:36 AM
Kathryn
Effective measures reflect corporate strategy

The measures and goals of contact centers must reflect the strategy of the entire organization. Employees should be able to recognize a direct line of sight from the contact center measures to enterprise measures. Everyone employed by the organization should understand what is important to the enterprise and how he or she, as an individual, can contribute to that strategy. The employees must also know how their contribution to organizational success is measured.

We can’t copy the measures, metrics, calculations, or standards of another contact center. The reason that we establish a measure is as important as the measure itself. Measures vary from organization to organization and are based on the specific needs of an organization. As many times as I have helped organizations build their metric systems, I have yet to find a standard set of measures that works for everyone.

October 30, 2006 12:34 AM
Kathryn
Correlation versus cause and effect

Mark Twain once said, “There are three kinds of lies: lies, damn lies, and statistics.”

Let me tell you another one of my pet peeves. Statistics are an important component of becoming excellent. But watch out. Watch the correlations that are made between different contact center data elements. When some people compare data and find a correlation, they automatically assume that the two elements have a “cause” and “effect” relationship.

Finding the wrong meaning in correlations can be dangerous. Let’s say I find a statistical correlation between the number of drowning victims at a beach and the number of ice cream cones sold at the same beach. I would be in error to claim, “because of the correlation, we believe that ice cream cones cause drowning.” That’s probably not the story; a better explanation is that on hot days more people are at the beach, more ice cream is purchased, and more people are in the water and exposed to drowning risk.

The second problem with correlations is that people seldom address what is causing what. A correlation doesn’t indicate which factor is the “cause” and which is the “effect.” One of our studies showed a positive correlation between marketing a contact center to the rest of the enterprise and lower agent turnover. But, we didn’t do additional work to identify whether “marketing” was the cause and “lower turnover” the effect or the other way around. Sometimes you can “guess” at the direction of the “cause / effect” based on insight, experience, and logic but you never really know for sure without further research.
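
To put numbers on the beach example, here is a small Python illustration with invented data: ice cream sales and water rescues both rise with temperature, so they correlate strongly with each other even though neither causes the other.

    # Invented daily beach data: both series are driven by temperature.
    temperature   = [68, 72, 75, 80, 85, 90, 95]          # degrees F
    cones_sold    = [120, 150, 170, 220, 260, 310, 360]   # ice cream cones
    water_rescues = [1, 1, 2, 3, 4, 5, 7]                 # swimmers in trouble

    def pearson(x, y):
        # Pearson correlation coefficient, computed from scratch.
        n = len(x)
        mean_x, mean_y = sum(x) / n, sum(y) / n
        cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
        sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
        sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
        return cov / (sd_x * sd_y)

    # Cones and rescues correlate almost perfectly, yet the real driver is
    # the hidden third factor (temperature), not the ice cream.
    print(pearson(cones_sold, water_rescues))
    print(pearson(temperature, cones_sold))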

We should check the data source and understand how the researchers gathered the data and drew their conclusions. And researchers need to understand the complexity of the call center environment. Without that understanding, they may draw the wrong conclusions.

Once, when I was managing a contact center, a customer complained to my boss about receiving a busy signal when she called. This boss saw a positive correlation between busy signals and customer complaints. He called me in and strongly suggested that I add more lines so no customer would ever get a busy signal again. I told him that I would be glad to, but we would probably need to add more agents. If we did not, we would be trading one problem for another -- the customer would have to wait on hold longer. My boss saw the correlation, but didn’t understand all the contact center linkages and complexities. What seemed to him like a quick fix was much more complex.

October 23, 2006 12:31 AM
Kathryn
Gaps versus waste

All of us want to be fiscally responsible. That means we have to invest our resources where they count. Sometimes, we invest in addressing problems that are “easy” to solve, but don’t gain much value in the process. Other times, we invest in areas that we think are important, but we can’t really back up our thoughts with hard data.

I was in a meeting with an executive who said he didn’t want any customer to wait longer than 20 seconds in queue. He thought that no caller on any day or during any time of day should wait longer than 20 seconds for an agent. So we told him we would put together a financial model to help him understand his initial investment. Guess what. When we ran the model for his 50-seat call center, we found that he needed an initial investment of more than $5 million. We told him, “The difference between your peak and average-hour call volume indicates we would need to triple capacity to make certain we got close to making this happen.” After he recovered, we asked him why he wanted to do this. He said he thought it would look good in a marketing brochure.
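
For readers who want to see where numbers like that come from, here is a rough sketch of the queueing arithmetic involved. This is not the financial model we built for that client; it is a generic Erlang C calculation with hypothetical call volumes and handle times, and it simply shows why covering a tight 20-second target at the peak hour takes far more agents than the average hour suggests.

    import math

    def erlang_c(agents, erlangs):
        # Probability that an arriving call has to wait at all (Erlang C).
        numer = (erlangs ** agents / math.factorial(agents)) * (agents / (agents - erlangs))
        denom = sum(erlangs ** k / math.factorial(k) for k in range(agents)) + numer
        return numer / denom

    def service_level(agents, calls_per_hour, aht_seconds, target_seconds):
        # Fraction of calls answered within the target, assuming exponential service times.
        erlangs = calls_per_hour * aht_seconds / 3600.0
        if agents <= erlangs:
            return 0.0  # offered load exceeds capacity; the queue never clears
        p_wait = erlang_c(agents, erlangs)
        return 1.0 - p_wait * math.exp(-(agents - erlangs) * target_seconds / aht_seconds)

    # Hypothetical center: 300-second handle time, average hour vs. peak hour.
    for calls_per_hour in (400, 900):
        for agents in (50, 100, 150):
            sl = service_level(agents, calls_per_hour, aht_seconds=300, target_seconds=20)
            print(calls_per_hour, "calls/hr,", agents, "agents:", round(sl, 3))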

We talked about what was important to his customers. He didn’t know whether the current wait time was an issue. We conducted a customer survey and discovered that 20 percent of his orders were shipped with missing or incorrect items. The customers really wanted him to invest in fixing the shipping problems!

When he did remedy the shipping issues, guess what happened to his call volume and his wait times?

Right. They fell because irate customers weren’t calling about incorrect orders anymore.

This is a perfect example of gaps and waste. It would have been a waste of money to invest in the 20-second wait time. The real gap, in the customers’ minds, was getting the orders right. Investing in fixing this gap would give the executive far greater return.

Be careful about gaps and waste. Gaps occur when we aren’t meeting a need and waste occurs when we are providing something no one values. Investing in what is of value is a smart use of our precious resources.

October 16, 2006 12:29 AM
Kathryn
Best-practice benchmarking versus surveys - Part 2

Surveys can be problematic. You know the old story about the lemmings that follow each other over a cliff to their death? Surveys may be telling you that everyone is heading a certain way, but only if you look hard enough can you discern where they are actually going.

Let me illustrate with a story from my recent past. I was in Philadelphia driving to a meeting and could not find a parking place anywhere. Finally, I found a lot that wasn’t full. So, I pulled in. When I did, the parking lot attendant ran up to my car yelling hysterically. He kept pointing to the “entrance” to the lot, and yelling at me that I had come in the wrong way and had just run over the spikes. (You know the ones; they are the in-ground “teeth” that keep people from stealing a car.)

Needless to say, all I could think was, “Great, not only am I late, but I’ll also have four flat tires when I return from the meeting.” I glanced in the rearview mirror and noticed that another car had followed me in the same “wrong” way. The parking lot attendant ran to the driver of the other car to alert him of the danger. The other driver pointed at me and yelled, “But I followed her!” To which the parking lot attendant yelled right back, “Yes, and she ran over the spikes as well!”

To me, this is a great story about the difference between following someone you know has mastered a best practice and following someone who is simply “feeling” his or her way through trial and error. (By the way, I had no damage to my tires; the man who pulled in behind me had four flats.)

So, be careful. Know when you need to benchmark (and how) and when a survey will serve you just fine. Don’t drive four hours for nothing, and please don’t fall over the edge of a cliff!

October 9, 2006 12:27 AM
Kathryn
Best-practice benchmarking versus surveys - Part 1

Let me tell you one of my pet peeves. Many people don’t know the difference between best-practice benchmarking and surveying, and they apply the results of each in all the wrong ways.

Best-practice benchmarking is conducted to find best practices. (That’s an eye opener, I’ll bet!) Surveys, on the other hand, simply tell you what other call centers are doing. What they are doing may or may not be a best practice.

A best practice is a double-edged sword. What seems to be a best practice for one call center may not be for the next. Not all best practices make fiscal sense to a call center in its quest for excellence. Companies can no longer pursue “best-in-class” without demonstrating fiscal responsibility.

When benchmarking first came on the scene, people were excited and wanted to benchmark everything. Benchmarking nay-sayers claimed that the benchmarking teams were engaged in “industrial tourism.” People were visiting contact centers just to get out of the office -- and maybe get a few good ideas. Strict preparation was seldom done and agendas were seldom followed.

I spoke with Sally just after she had been promoted to contact center director. She wanted, in the worst way, to visit another center. She finally found another center manager who was willing to let her visit. Sally tried to sell the trip to her management as a benchmarking opportunity.

They didn’t buy it.

Well, Sally decided to go on her own. She drove four hours to the other center. When she arrived, the other director graciously ushered Sally into his office. As they settled in, the center director politely asked Sally what her questions were. Sally hadn’t thought to write them down. She was able to think of some on the spur of the moment. Sally continued to emphasize that she really wanted to “see” the call center.

After 60 agonizing minutes, the center director finally said, “Well, you want to go take a look?” Sally was elated. They both stood up to walk out the office door. Immediately after crossing the threshold, the director stopped, made a sweeping gesture with his right hand and asked, “Well, how do you like it?” Sally was sure there would be more, but after a couple of very quiet moments during which the director didn’t move from the doorway, she answered, “It’s very nice.” The director commented, “Yes, we think so.” The director then returned to his desk to wrap things up. Sally thought, “This is it? This is what I took a day off work for and drove four hours to experience?”

No, Sally. There is so much more – especially in the preparation!

October 2, 2006 12:26 AM
Kathryn
Indicator versus diagnostic metrics

Most people measure the wrong things in the wrong timeframe. Sometimes, we measure tactical actions when we should be looking at results. How many actions do your agents take to produce the desired result? Wouldn’t it be easier to simply measure the result, and, if the result is not as expected, measure the actions?

Our indicator metrics need to keep us focused on the right things. If performance on an indicator metric is weak, then we can dig deeper into what I call “diagnostic metrics.” These diagnostic metrics are usually a pitfall for most organizations (like the company measuring more than 100 metrics per agent per day). If you focus on diagnostic metrics, you never have time to change. You spend all your time looking at the numbers trying to figure out what they are telling you.
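
Here is a small, hypothetical Python sketch of that idea: check one indicator first, and open up the diagnostic detail only when the indicator is weak. The metric names and numbers are invented for illustration.

    # One day's results: a single indicator plus the diagnostic detail behind it.
    day = {
        "service_level": 0.71,            # indicator: share of calls answered within target
        "diagnostics": {                  # invented detail behind the indicator
            "average_handle_time_sec": 412,
            "occupancy": 0.93,
            "absenteeism": 0.11,
        },
    }

    INDICATOR_TARGET = 0.80

    if day["service_level"] >= INDICATOR_TARGET:
        print("Indicator healthy: no need to dig into the detail today.")
    else:
        print("Indicator weak: drill into the diagnostics.")
        for name, value in day["diagnostics"].items():
            print(" ", name, "=", value)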

The concept of indicator and diagnostic metrics holds true for every aspect of the contact center – agents, operations, service level, etc. All contact centers I have worked with (no matter how unique or complex) have indicator metrics. Figuring out what these indicators are takes time, but it is time well spent.

People look at micro-level (diagnostic) metrics when they don't need to. If you are examining a mountain of reports all day long, you'll never have time for strategic thinking nor will you have time to invest in your people. The answer lies in knowing when to look at certain data and when to ignore it.

September 25, 2006 12:24 AM
Kathryn
The measurement pendulum

During the APQC call center metric study, it was my job as subject matter expert to determine the best call center metrics. Can you guess the conclusion of the nine months of study? Nobody knows what is best. We found a measurement pendulum.

The pendulum swings one way: you measure everything. One call center measured more than 100 different metrics per agent per day, and still managed to win a prestigious quality award. The pendulum swings the other way: you measure practically nothing. When we asked company managers what they measure currently, they told us, “Now we are down to measuring only two things.”

I interviewed another best-practice company, well known for its incredible service. I asked, “What do you measure?” The company representative said, “We measure one thing.” I thought, “Oh good. You know the answer.” I asked, “What is that measure?” He said, “How much time during a person’s shift are they available to take calls.” My next question was, “Do you think that is the one right metric?” He said, “Kay, we don’t have a clue, but we were measuring everything and we got tired of measuring everything. So, now we only measure one thing. Now we’re going to figure out what metrics truly tell us, in a balanced way, how we’re doing.”

We need to measure what matters and stop the pendulum swing.

September 18, 2006 12:22 AM
Kathryn
Measure what matters

I was the subject matter expert for a call center metric study sponsored by APQC, a Houston-based, not-for-profit research organization. We spent nine months gathering data, interviewing people, and attending site visits to determine the best call center metrics.

We found that much of the variation in measurement systems can be explained by the newness of the industry. Many call center managers continue to be in the “trial-and-error” stage of development, and are looking for the one “magic measure” that will solve all their problems.

We also discovered that measures need to change as the company changes.

Contact centers predominantly use three methods to determine what they will measure: benchmarking (used by 80 percent of participants), customer expectations (used by 68 percent of participants), and industry standards (used by 64 percent of participants). What companies measure is correlated directly with:
Company culture - the number and types of measures depend on the company’s focus; for example, companies that lean toward employee involvement have a different measurement system than companies with a command-and-control structure.
Age of operation - the younger the company, the more volatile the measures.
Experience level of the people - the greater the experience level, the more stable the measures.

September 11, 2006 12:20 AM
Kathryn
Motivate and illuminate through measurement

A recent survey states that measurement-managed companies are more likely to:
• Be in the top third of their industry financially,
• Have completed their last major organizational change successfully,
• Enjoy favorable levels of cooperation and teamwork among managers,
• Undertake greater self-monitoring of performance by employees,
• Have employees who are more willing to take appropriate risks, and
• Have senior executives who are better able to manage and lead their organizations.

Though contact centers have been operating successfully for several decades, the industry is relatively new, and many centers find themselves struggling to define a measurement system that is meaningful to the organization. While all agree that measuring performance is important, many centers are grappling with what should be measured and how. The critical task for call centers is to find the measurements that serve as indices of business success and to make those measurements understandable across the organization.

We measure a lot, but we don't know why. In the customer contact industry, we've always had technology that delivers an incredible amount of data to the management team every day. Managers believe that, because these numbers are produced, they must be important to look at. So, they diligently spend time trying to decipher the data, thinking they will unlock the secret of the contact center's performance.

Unfortunately, many of the numbers don't tell us anything; data by itself is meaningless. The only time data provides insight is when it's put into context. The primary reason we measure should be to accurately assess current performance so we can determine the best way to invest our resources. In my opinion, one of our greatest downfalls is that we over-measure and under-manage.

September 4, 2006 12:16 AM
Kathryn
Defining measures, metrics, calculations and standards

The way we measure our contact center depends on our position in the company, the roles we take on, and the new processes and technologies we implement. Everyone measures differently; however, we should all revisit our measurement philosophy often.

Does your measurement system reflect what is truly happening in your organization? What are you concerned about? Measuring too many things? Measuring too few? Deciding what action you take after you’ve examined the measured results? Presenting a data-based case for improvement?

In the next few blog entries, I’ll share what I have recently learned about measurement. Here are some definitions that I’ll use in this series (a small worked example follows the list):
1. Measure: a category of metrics; an example of a measure is “effectiveness.”
2. Metric: a single entity within a measure that indicates performance; an example of a metric is “sales per hour.”
3. Calculation: the formula defining a metric in the specific environment of each call center (technology, data sources, etc.).
4. Standard: the target measure for each contact center. Standards are set for both the “ultimate” goal and the interim “stepping stones” that centers accomplish on their way to achieving the ultimate goal.
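
To show how the four terms fit together, here is a small, hypothetical Python sketch: the measure is “effectiveness,” the metric is “sales per hour,” the calculation is the formula applied to this center’s own data, and the standard is the target each agent is compared against. All names and numbers are invented.

    # Metric: "sales per hour" (one entity within the "effectiveness" measure).
    records = [  # one row per agent for one day; the numbers are invented
        {"agent": "A101", "sales": 18, "logged_hours": 7.5},
        {"agent": "A102", "sales": 11, "logged_hours": 6.0},
        {"agent": "A103", "sales": 25, "logged_hours": 8.0},
    ]

    STANDARD_SALES_PER_HOUR = 2.5  # standard: the target this center has set

    for r in records:
        # Calculation: the formula that defines the metric in this center's environment.
        sales_per_hour = r["sales"] / r["logged_hours"]
        status = "meets standard" if sales_per_hour >= STANDARD_SALES_PER_HOUR else "below standard"
        print(r["agent"], round(sales_per_hour, 2), status)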

During this series, I will use “measures” as the generic term. Stay tuned; we’ll launch into the fascinating world of call center measures next.

June 19, 2006 12:42 AM
Kathryn
Customer data and how to get it

According to a recent survey, 58 percent of CEOs believe they do not have enough information about their own customers. And they say they do not know how to get it.

They must not have call centers. If they did, they would have a wealth of customer information at their fingertips. Most contact center professionals are working feverishly to distribute customer information to everyone who needs it. When CEOs or department heads request more data, contact center managers gather it in a myriad of customer-friendly ways.

“Drip irrigation,” for example, is an effective, low-cost method of collecting data in which customers are asked for small details about themselves during the course of their conversation with the agent. Each tidbit is not worth much alone; but multiplied, the information can show trends and hidden dissatisfiers.

Would your CEO have been one of the 58 percent if he or she had been surveyed? If not, what steps have you taken to ensure he or she is in the know?

June 6, 2006 04:20 PM
Kathryn
Critical control points

How many measures do you track in your contact center on a daily, weekly, or monthly basis? Ten, twenty, a hundred? Do you know why you track each metric? Do you think watching each measure every day is critical, or do you only measure certain elements after a warning? Have you categorized your metrics according to “cause” versus “effect?”

If you struggle with your measurement system, Temple Grandin, the author of “Animals in Translation,” has terrific insight regarding how to set up a successful measurement system. She attributes her measurement success to understanding critical control points. Ms. Grandin defines a critical control point as a “single measurable element that covers a multitude of sins.”

Ms. Grandin says, “When I’m auditing the animals on a farm, one thing I want to know is whether the animal’s legs are sound. There are a lot of things that can affect a cow’s ability to walk: bad genes, poor flooring, too much grain in the feed, foot rot, poor hoof care and rough treatment of the animals. Some people will try to measure all of these things. But that’s not my approach. I measure one thing only: how many cattle are limping? That’s all I need to know. If too many animals are limping, the farm fails the audit. After that, the farm management has to figure out what is causing the limping and then how to fix it.” To Ms. Grandin, this one critical control point (limping) covers all the possible reasons (poor flooring, foot rot, etc.) an animal might be experiencing the undesired outcome (unsound legs).

What areas of your contact center do you want to know are sound? Are you measuring the critical control point for each of those areas? If you have more than 10 measures, then you probably aren’t aligned to this critical control point measurement method. But if you are one of the “critical” elite, please let us know the control points you have identified.

May 19, 2006 02:04 PM
Kathryn
Using a "dose of reality" report

I smile each time I think about a recent conversation I had with a contact center professional. As we talked about how the next generation contact center would be defined, he told me about his innovative "DOOR" (dose of reality) report. I would like to thank Linksys for allowing me to share this concept; I think it should be a staple in all contact centers.

The director of the Linksys centers personally pioneered the process about two months prior to rolling it out to his management team. Now that he is convinced of the idea’s value, he asks each member of the management team to monitor a minimum of two calls per week, every week. The team discusses specific customer issues illuminated by the calls at the director’s weekly staff meeting. The team communicates the issues to the appropriate TAC (technical assistance center) sites, and then tracks the resolution. If the customer’s issue is not resolved, the management team notifies the contact center’s internal escalation team to follow up.

The team searches the DOORs not only for customer-specific issues, but also for trends. Identifying the trends allows the team to develop and assign corrective actions. The director reviews progress against these actions, also in his weekly staff meeting.

The director believes the process helps sensitize managers to the issues their customers face every day. The understanding creates a greater sense of urgency that drives improvements in overall performance and keeps the team focused on real issues.

Linksys is now implementing an automated reporting process so it can expand DOORs to the entire executive management team. One team member is creative in how he approaches his monitoring assessment. He converts the sound files and downloads them to his mobile audio player. He listens to the calls during his commute to and from the office. His only caution to other DOOR participants is that sometimes he finds himself trying to talk to the technician to make suggestions while driving. He becomes fairly wrapped up in the process and forgets it’s not “live.” Linksys doesn’t want to cause any accidents!

What innovative “next generation” procedures has your contact center team adopted?
