Sticky Design: Metrics for Reuse
Here in the Labs we are big proponents of user testing just about everything, but some things are easier to test than others. One of the trickiest things to measure accurately, especially in a lab setting, is propensity to reuse. In the current climate of intense competition between apps for market share, and for users’ attention, simply being chosen is no longer enough. Retaining users over the long term is often a far higher priority.
After studying your users, their needs, their expectations and what they value through the usual techniques of ethnography, customer journey mapping and the many other tools of user research, there are a few additional techniques that can help you identify which services are going to keep people coming back for more, and which might need a little reworking.
Diary studies are the traditional solution to this problem, but sending users away from a lab setting comes with its own set of risks, particularly if there are elements of your new project that you are trying to keep under wraps. If your product or service is intended for continual use, then a week’s worth of diarized usage can be enough to help you identify the patterns involved.
Long-term studies also have value because they can show you how users’ perceptions of the product change over time. There is always a learning curve with a new product or service, and it is important to see that user satisfaction is higher at the end of that curve than at the beginning.
There is a risk in a lab setting that your volunteers are just going to say what they think you want to hear. Humans have a socialized desire to please, and that can interfere considerably with the answers you get from users. You need look no further than the “New Coke” fiasco for an example.
You can ask users directly how often they expect to reuse a product or service, but the way the question is posed is absolutely vital. Maintaining a neutral tone and showing no vested interest in either answer is a good start. Abstracting the question with a 7-point scale, ranging from likely to unlikely, or asking for specific reasons they would or would not reuse the product, will provide much clearer data and might even flag some pain points that can be remedied.
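As a rough illustration, a batch of 7-point scale answers is often summarized with a “top-2-box” score (the share of respondents answering 6 or 7) alongside the mean. This is a minimal sketch with entirely hypothetical response data:

```python
# Hypothetical responses on a 7-point scale (1 = very unlikely to reuse,
# 7 = very likely to reuse)
responses = [7, 6, 4, 2, 6, 5, 7, 3, 6, 1]

# Top-2-box score: share of respondents answering 6 or 7
top2 = sum(1 for r in responses if r >= 6) / len(responses)

# The mean gives a quick central tendency, though the full distribution
# (and the stated reasons behind each answer) matters just as much
mean = sum(responses) / len(responses)

print(f"top-2-box: {top2:.0%}, mean: {mean:.1f}")  # → top-2-box: 50%, mean: 4.7
```

A top-2-box score is deliberately conservative: it counts only the respondents who express a strong intent to reuse, rather than letting lukewarm answers inflate an average.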
Ranking and Budgeting
Placing your new product in the context of the existing marketplace is another indirect way to harvest reuse data. If you have users list their current “top 5” most used products or services in the same vertical and then ask them to place your new product into that list, you can find out not only more about your product as it stands alone, but also about the key areas where it falls behind or exceeds the competition.
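A simple way to summarize this exercise is to record where each participant slotted the product into their top 5, treating “didn’t make the list” as a position beyond it. A minimal sketch, with made-up placements:

```python
# Hypothetical positions (1 = most used) where participants slotted our
# product into their personal "top 5"; 6 means it didn't make the list
placements = [2, 5, 3, 6, 4]

made_list = [p for p in placements if p <= 5]
avg_position = sum(made_list) / len(made_list)

print(f"made top 5: {len(made_list)}/{len(placements)}, "
      f"average position when listed: {avg_position:.1f}")
```

Reporting the two numbers separately matters: a product that a few enthusiasts rank first looks very different from one that everyone grudgingly puts fifth, even if a blended average would hide the distinction.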
Forcing users to create a “budget” out of their time and determining how much of it they would spend on your new product compared to other options is another great way to gauge the likelihood of user retention.
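One way to analyze a time-budget exercise is to compute, per participant, the share of their total budget allocated to your product, then average those shares. A minimal sketch with hypothetical budgets and competitor names:

```python
# Hypothetical weekly time budgets (hours) from three participants, split
# across our product ("ours") and two made-up competitors
budgets = [
    {"ours": 2, "AppA": 5, "AppB": 3},
    {"ours": 0, "AppA": 6, "AppB": 4},  # zero allocation is itself a signal
    {"ours": 4, "AppA": 4, "AppB": 2},
]

# Normalize per participant so heavy users don't dominate the average
shares = [b["ours"] / sum(b.values()) for b in budgets]
avg_share = sum(shares) / len(shares)

print(f"average budget share: {avg_share:.0%}")  # → average budget share: 20%
```

Normalizing within each participant before averaging keeps the exercise about relative priority, which is the question retention actually hinges on, rather than raw hours of use.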
In this exercise, we ask users not to talk about the product itself but to rank statements as important or unimportant to their usage decisions. If the statements that represent your product or service’s core design strengths are not ranked highly, then you may have to consider reworking it with the higher-ranked needs in mind to ensure that you are meeting an actual need.
The Bell of Despair
Even in the context of a neutral lab setting, some users are uncomfortable announcing that they have a problem, whether because they internalize a pain point and blame themselves for the product’s failure, or simply because of the social discomfort they associate with complaining. To counter this reluctance to highlight pain points, we like to use the Bell of Despair: a little button that participants can casually tap every time they have a problem with the product in testing, alerting us that there is a problem, however minor.
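Behind the button, all that is needed is a timestamped log of taps tagged with the task in progress, so the taps can be counted per task afterwards. A minimal sketch, with hypothetical task names and a simulated session:

```python
from collections import Counter
from datetime import datetime

# Each tap records which task the participant was on and when it happened
taps = []

def ring_bell(task):
    """Called whenever a participant presses the Bell of Despair."""
    taps.append({"task": task, "at": datetime.now()})

# Simulated session (hypothetical task names)
ring_bell("sign-up")
ring_bell("sign-up")
ring_bell("checkout")

# Tasks with the most taps are the likeliest pain points
per_task = Counter(t["task"] for t in taps)
print(per_task.most_common())  # → [('sign-up', 2), ('checkout', 1)]
```

The timestamps also let you line taps up against a screen recording afterwards, so participants never have to articulate the problem in the moment.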
Understanding Underlying User Behavior
Users are often not the best predictors of their own future behavior, which is why it’s invaluable to conduct research in their own environment. By studying underlying behavior, you are better able to understand whether the product really is going to address a need. For example, a new messaging app is hardly going to succeed if there is limited evidence of existing communication with friends, family and colleagues via existing digital channels.
The Best Solution
None of these techniques is a perfect answer, and it is wise to keep gathering data after a product has been released into the wild to see if there are ways to improve users’ experiences. By using them, however, you can acquire some of the underlying data that will let you make an educated guess about whether the things you are testing will become a daily part of someone’s life or gather dust on a shelf.