Customer experiences: It’s what you don’t know that counts
The rise of big data has driven a lot of change around the boardroom table. It’s longer now, with more laptops on it. Where once sales, finance and operations steered the ship, we now find data expertise represented at the highest levels, with the CXO (Chief Experience Officer) sitting alongside other relative newcomers like the Chief Innovation Officer or the Chief Social Media Officer. It’s the digital realization of the old business adage: you can’t manage what you don’t measure.
There have also been significant advances in our ability to capture and process customer feedback. Metrics such as NPS (net promoter score), CSAT (customer satisfaction score) and CES (customer effort score) are now commonplace, easy-to-understand measures that quantify the efficacy of customer experience programs. But when it comes to understanding customer experiences, measurement in the traditional sense of the word is only half the story. In many respects, the role of CXO sums up another, older gem of conventional wisdom: you don’t know what you don’t know.
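Of these metrics, NPS has the simplest formula: the percentage of promoters (scores of 9–10 on a 0–10 scale) minus the percentage of detractors (scores of 0–6). A minimal sketch in Python (the function name and sample scores are illustrative, not from any particular tool):

```python
def nps(scores):
    """Net promoter score from a list of 0-10 survey responses."""
    promoters = sum(1 for s in scores if s >= 9)   # 9 or 10
    detractors = sum(1 for s in scores if s <= 6)  # 0 through 6
    return round(100 * (promoters - detractors) / len(scores))

# Two promoters and two detractors out of six responses cancel out:
print(nps([10, 9, 8, 7, 6, 3]))  # prints 0
```

Note how the 7s and 8s (the “passives”) count toward the denominator but not the numerator, which is exactly why NPS is sensitive to the middle-of-the-scale clustering discussed below.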
What every CXO knows: Surveys aren’t perfect
Perhaps the biggest challenge facing the CXO is that surveys are often subject to the law of diminishing returns. If every call to the customer support center generates an email prompt to complete a CSAT survey, or a follow-up SMS asks the customer to rank a visit from an on-site engineer, the feedback process risks becoming a self-referencing paradox in which the customer experience is adversely affected by asking too many questions about how to improve it.
And this problem won’t go away, which is a big headache for the CXO. A decline in response rates is inevitable as consumers are bombarded with ever more requests for feedback. You can shorten each survey, or offer incentives, but in the customer’s mind there’s a thin line between automated research tools and spam.
But even when you have overhauled the survey process and it’s working, there is still the challenge of overcoming the inherent limitations of survey data, as we’ve discussed in a previous post.
Survey data vs ‘measurement blindness’
Studies show that most people, when offered a ranking scale from 1 (bad) to 10 (excellent), will opt for a number somewhere in the middle, skewing the data towards an average score that doesn’t reflect the totality of the customer’s experience. Text-entry boxes on forms are there to add detail, but are often left blank. The upshot is that, in aggregate, CSAT scores can be blunt instruments: they identify significant issues easily, but they don’t provide much insight.
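The averaging problem is easy to demonstrate: two very different sets of scores can produce an identical mean. The sample scores below are invented purely for illustration:

```python
from statistics import mean

# Hypothetical score sets: one polarized, one lukewarm
polarized = [1, 1, 2, 2, 9, 9, 10, 10]  # love-it-or-hate-it reactions
lukewarm = [5, 5, 5, 6, 6, 6, 5, 6]     # nobody delighted, nobody angry

print(mean(polarized))  # prints 5.5
print(mean(lukewarm))   # prints 5.5 -- same average, very different experiences
```

A dashboard showing only the aggregate would treat these two customer bases as identical, when one of them clearly contains a segment in real pain.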
Inconclusive surveys make for conservative decision making. As in ‘if it ain’t broke, don’t fix it’. Or commoditized thinking, where you focus on generic issues like the feature spec of a product or the price point, missing other key customer experience factors such as interaction cues, content appeal, engagement, or the ‘out-of-box’ experience when you get the product home from the shops.
A more nuanced problem is the danger of looking for patterns in customer research data. Underlying assumptions and preconceptions mean teams often look for answers that explain the problems they expect to find, and miss other issues altogether. Similarly, sentiment analysis can gauge the level of positive or negative feeling within verbatim CSAT text reasonably accurately, but the level of genuine insight such algorithmic tools offer is often disappointingly low.
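To see why sentiment scores can mislead, consider a toy lexicon-based scorer (a deliberately naive sketch, not any real NLP library; the word lists and example comment are invented):

```python
# Tiny illustrative sentiment lexicons -- real tools use far larger ones
POSITIVE = {"great", "quick", "helpful", "easy", "love"}
NEGATIVE = {"slow", "broken", "confusing", "hate", "problem"}

def sentiment(text):
    """Positive word count minus negative word count: >0 reads as 'positive'."""
    words = text.lower().replace(".", "").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# A friendly, fast interaction that failed to resolve anything still nets positive:
print(sentiment("The agent was quick and helpful but my problem is still not fixed"))  # prints 1
```

The comment reads as mildly positive to the algorithm even though the customer’s issue remains unresolved, which is exactly the gap between sentiment and insight.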
There’s a real need to establish a conceptual framework that understands the customers’ goals and behavior, beyond the reach of the survey questions they’re being asked. It’s not uncommon for in-depth, qualitative studies to correlate with CSAT scores, but come to very different conclusions about the causes of customer pain. CSAT scores will identify a problem, but without a conceptual framework that explains the customer’s behavior, the CX team will have a hard time working out what’s causing it.
As an example, I once worked on a project where a client experienced a high volume of calls to their contact center about simple product support issues. They had an online support center that covered all the relevant information, and assumed the contact center traffic was a result of the online support center being too hard to find and use. But despite a lot of work to raise its visibility within their main website and improve the support center’s usability, the calls kept coming.
When we spent time in their contact center, we realized customers were beginning their support journey by Googling for answers, and the search results were sending them to online forums and the like. The CSAT data captured the customer support pain, but the underlying problem had nothing to do with the online support center itself; it was that the support center didn’t rank highly enough in Google searches. That’s a behavioral problem, and no amount of CSAT surveying about the support center could have identified it.
The ‘measurement blindness’ problem is compounded by the fact that some critical aspects of customer experiences are harder to measure than others. For example, it’s difficult to capture CSAT data for physical points of sale and third-party distributors, or to capture it equally well across multiple channels, from mobile to face-to-face. There’s a risk of focusing on the richest data sources and overlooking the rest. The only way to compensate for this partial lack of insight is with an immersive research approach that seeks to understand the whole experience from the customer’s perspective.
Diving deep: A 360 approach for smarter CX measurement
Immersive research means going to the customer and understanding the context of their relationship with a product. Immersive approaches (like using ethnographic techniques to understand the cultural context around a customer’s experience) focus on the customer in a more holistic, personal sense. The aim is to understand the customers themselves, and not just an instance of their customer experience via a specific channel or touchpoint.
The first step to an effective 360 CX program is to gain an understanding of the way your customers view your domain. With so much disruption in the media, finance and retail sectors, and step changes in consumer behavior, a regular deep dive into the cultural aspects of how your customers relate to your domain is essential.
Most companies have a grasp of their marketplace from sales metrics and conversions, plus the usual customer experience metrics. However, understanding how their customers view the broader context of the market, and the role it plays in their lives, is just as essential. Creating behavior-based personas for different kinds of customer is a core approach here: personas often provide a better way to understand your customers because they cut across traditional demographic measures and broader metrics like the size of their organization, their professional status or their income bracket.
Secondly, improving the quality of the quantitative data is critical. NPS, CSAT and CES research is still invaluable, but it needs to be more carefully controlled, and to acknowledge the customer’s contribution, in order to address the inherent limitations of surveys. It’s a question of measuring the right things, not the things that are easy to measure, and of not assuming every interaction is an opportunity to measure something.
Finally, surveys should always be complemented with immersive field research that explores the customer’s needs and goals, and the role the product plays in their life. Techniques like customer journey mapping (replacing old-style funnel diagrams with non-linear customer paths through point-of-sale, online, mobile and social media channels), ethnography and user diary studies are essential tools that add context to your quantitative data. Smartphones can play an extremely useful role in field research. We’ve all seen how effective snapping an image and posting it to Twitter can be for customer service interactions, but the broader potential for developing 360-view CX programs using smartphones is huge.
I once worked with a streaming media product that was optimized for landscape layouts and wifi connections. When we ran a diary study, asking users to send us images of their screens, plus snaps of where they used the product, to add context to their CSAT data, the results were striking: mostly portrait layouts, and pictures taken outdoors where there was no wifi! Examples like this make the case for immersive research perfectly. Never trust the future of your business to just one measurement technique.