8 VoC Success Factors for Huge Buy-In & Engagement & ROI

By Lynn Hunsaker, CCXP posted 14 days ago

Only a third of CX managers say their VoC is "good" or "very good" at making changes to the business. That means two-thirds of VoC programs are not good at driving change. Only 15% said their VoC is "very successful" at this.

(This is the same trend for the past 10+ years in Qualtrics State of CX reports and Temkin Group's State of CX reports.)

Return on investment depends on the difference between gains after spending and gains before spending.

Before-and-after gains are highest when improvements are made to become more in-tune with customers.

Why? Out-of-tune practices are the source of costs to serve. In-tune practices have magnetic attraction for existing and new customers, partners, employees, and investors.

This is a 3-part article series at ECXO about urgent solutions to this gap.

Better measurement is urgently needed for higher quality data. Mainstream VoC practices are hurting data quality, which reduces buy-in, engagement, credibility, and CX job tenure. Here are 8 better VoC practices:

1. Use stratified random sampling
When you ask every customer for feedback, you have no control over who responds. Statistical confidence requires responses from each type of customer in proportion to their share of your market. Under-representation or over-representation of certain customer types means your data has low statistical significance, which gives managers low confidence that acting on your data will be productive.

Instead, use stratified random sampling. For each customer segment, use a sample size table to determine your quota, then randomly invite enough customers to meet that quota. This ensures realistic representation. It also reserves uninvited customers for participation in other studies this year, preventing survey fatigue and increasing response rates.
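As a rough sketch of the mechanics (all customer IDs, segment names, and quotas here are hypothetical), stratified random sampling can be done in a few lines: draw a random sample within each segment, sized to that segment's quota, and leave everyone else rested for later studies.

```python
import random

def stratified_sample(customers, quotas, seed=42):
    """Randomly pick survey invitees per segment to meet each quota.

    customers: list of (customer_id, segment) tuples
    quotas: dict mapping segment name -> required sample size
    Returns the invited customer ids; uninvited customers stay
    available for other studies, avoiding survey fatigue.
    """
    rng = random.Random(seed)
    invited = []
    for segment, quota in quotas.items():
        pool = [cid for cid, seg in customers if seg == segment]
        # sample without replacement, capped at the pool size
        invited.extend(rng.sample(pool, min(quota, len(pool))))
    return invited

# Example: enterprise customers are 20% of the base, SMB the other 80%,
# so the quotas keep the same 20/80 proportion.
customers = [(f"c{i}", "enterprise" if i < 200 else "smb") for i in range(1000)]
invited = stratified_sample(customers, {"enterprise": 60, "smb": 240})
```

The quotas, not the raw response stream, control who is represented, so the resulting data mirrors your market mix by design rather than by luck.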

2. Rephrase questions from customers' view
Choice of words is very important. When you use words like "knowledgeable", "responsive", and "recommend", customers must judge you, and that's hard and unpleasant. It's self-centric rather than customer-centric.

Instead, look at Support conversations to see how customers talk about what frustrates them and what pleases them. Use those phrases. For example, you'll find "How knowledgeable was X?" can be replaced with "How informative was this for you?"

3. Pre-test questions for consistent interpretation
When different types of customers interpret your questions differently, you can't accurately interpret their responses. For example, "on-time delivery" may mean within a few hours, within a few days, by the date originally promised, or by the date originally expected. Inconsistent interpretations mean your data has low statistical validity.

Instead, pre-test your questions with 6-12 different types of customers. Give them the phrases as word strips, digitally or in-person. Let them organize the word strips in any patterns they want. Ask them what their arrangement means, and take careful notes, without biasing their story. This shows you how to rephrase questions for universal interpretation, or to use certain phrasing in separate questionnaires for different types of customers.

4. Pre-test scales for consistent interpretation
Likewise, when different types of customers interpret your scales differently, it's impossible to accurately interpret their responses. This means low statistical validity. Managers receiving your report will be misled.

Instead, pay attention to what customers say during your word strip exercise above. Do they see certain things as a Yes/No situation? Are some things easier for them to judge as Low/Medium/High? Get a sense of how many points your rating scale needs for each question. Then, group questions with similar rating scales together in your survey. 

5. Customize to what's natural for each segment
One size does not fit all. Technologies make it easy to customize scales and questions specific to what's natural for each type of customer. 

Use different questionnaires or branching to tailor the scales and questions for customers' ease and consistency. Make sure VoC is customer-centric. It makes no sense that a CX effort is a bad customer experience itself.

6. Avoid biasing via colors, icons, instructions
A survey shouldn't require teaching customers what your scales and words mean. Colors, icons, and instructions can make customers feel ashamed, guilty, or pressured to say something that does not truly reflect their thoughts. Then your data is inaccurate. It's a waste of everyone's time and effort.

Instead, discover how customers see things, and use their words and their gradations of goodness. Use branching or separate questionnaires to make participation as natural and easy as possible. This will increase response rates and data accuracy. Good CX means respect. Remove all pressure and all implications of what you expect them to answer. Seek pure truth.

7. Ask more than one question: for correlation
When you ask only "How likely are you to recommend us?" and a comment question, priorities for improving CX are impossible to statistically determine. As stated at the start of this article, impressive ROI requires a shift from out-of-tune to in-tune business practices.

Instead, ask about the things customers are most emotional about in Support conversations. These things are Moments of Truth: points in their experience where they're inclined to disengage or engage more. When you ask a series of Moments of Truth questions, those ratings can be correlated with your overall question (e.g. likelihood to recommend or rebuy).

Statistical correlation analysis provides a coefficient for each question. The highest coefficients are key drivers of loyalty. Prioritization by correlation analysis has proven to be more accurate than asking customers "how important is X" or other methods such as quick wins. Part 2 of this article series will explain more about this.

8. Segment by expectations, not demographics
We typically segment customers by industry, usage rate, gender, or generation. However, what really matters is whether you meet or exceed their expectations. Most likely, people in the same industry have a variety of expectations. The same applies to all typical segmentation criteria.

Instead, look for patterns in customers' Moments of Truth. Do different customers have differing outcomes in mind for their Moments of Truth? The outcomes they're seeking can also be called jobs-to-be-done. It's their ultimate aim that is most important for segmentation. For example, is one group of customers aiming for risk minimization via your brand, while another group of customers is aiming for cost minimization? 

When you segment customers by their ultimate aim, then you're setting up your questions, scales, data analysis, and managers' actions to meet or exceed expectations. 
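As a toy illustration of segmenting by ultimate aim (the customer names, outcome labels, and ratings here are all hypothetical), each customer is assigned to the outcome they rate highest, regardless of industry or demographics:

```python
# Hypothetical data: each customer rates (1-5) how central two outcomes
# are to what they're trying to get done with your brand.
responses = {
    "acme": {"risk_minimization": 5, "cost_minimization": 2},
    "bolt": {"risk_minimization": 2, "cost_minimization": 5},
    "core": {"risk_minimization": 4, "cost_minimization": 3},
}

# Group customers by their dominant jobs-to-be-done outcome.
segments = {}
for customer, aims in responses.items():
    aim = max(aims, key=aims.get)
    segments.setdefault(aim, []).append(customer)
```

Customers from very different industries can land in the same segment here, which is the point: questions, scales, and managers' actions can then be tailored to the expectation each segment is actually trying to meet.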

Conclusion
Better measurement will improve customers' experience in many ways: more pleasant VoC participation, higher response rates for data accuracy, truer data for managers' decisions, higher credibility of your CX team, greater buy-in and engagement of managers, and changes that make experiences in-tune with customers. All of this improves ROI and happiness for everyone.

See the original article referred to in the opening paragraph and graphic here:
https://clearaction.com/15-voice-customer-programs-successful/ 

How to get more advice like this? ClearAction.com/ccxp and ClearAction.com/leader (use the CXPA member 10% savings code at the bottom of this web page: CXeducation.com)