Support – or Customer Service – is a gold mine of information about your Product. Even so, many companies ignore the data that is generated daily through customer tickets.
Here at Resultados Digitais, we recently started analyzing this data to provide feedback to RD Station's product development management. This post presents some of the analyses we have done and the challenges we have encountered along the way.
Why Review Support Tickets?
Support tickets are a precious source of information: they reveal what “pains” and obstacles our customers face when using RD Station. Metrics such as engagement give us insight into which features are most used, and indicators such as generated leads try to reveal whether our customers are succeeding with our tool; however, only Support is a true and constant source of feedback on what isn’t working so well.
In the case of RD, some suggestions for improving the product eventually come through consultants (who are in direct contact with customers). But Support is the main channel customers turn to when exposing their difficulties with RD Station.
For a company whose core values include being Customer First, we could not ignore Support as one of the main inputs for Product Management.
How we classify Support tickets
Today, any ticket opened by a customer is classified internally (through our help desk platform, Zendesk) along two dimensions:
1 – Regarding the “type” of the ticket
If the action the customer is trying to perform exists and isn’t working, the ticket is related to a Bug. On the other hand, if the functionality exists and works – but the customer can’t accomplish it on their own – then we have a UX (User Experience) issue.
It is also possible that the obstacle encountered by the user is related much more to conceptual questions about Digital Marketing than to problems with the software itself: in this case, the ticket is classified as “type” Education.
And so on! Today, we’ve grouped tickets into at least seven different categories within this dimension.
2 – Regarding the “functionality” that generated the ticket
At this point, it’s a matter of splitting the product (in our case, RD Station) into several features. For example, Email Marketing or Landing Pages issues are classified under their respective areas.
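Concretely, each classified ticket can be thought of as a record carrying both dimensions, which can then be tallied for the monthly analysis. A minimal sketch in Python – the type and feature names below are illustrative, not RD's actual taxonomy:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Ticket:
    """A support ticket classified along the two dimensions."""
    ticket_type: str  # e.g. "Bug", "UX", "Education", ...
    feature: str      # e.g. "Email Marketing", "Landing Pages", ...

# A hypothetical batch of classified tickets:
tickets = [
    Ticket("Education", "Email Marketing"),
    Ticket("Bug", "Landing Pages"),
    Ticket("Education", "Email Marketing"),
    Ticket("UX", "Landing Pages"),
]

# Tally tickets per (type, feature) pair -- the raw material for
# the analyses described in the next section.
counts = Counter((t.ticket_type, t.feature) for t in tickets)
print(counts[("Education", "Email Marketing")])  # 2
```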
Based on these two dimensions, at the end of each month we carry out an analysis of the data generated by Support.
How We Analyze Support Tickets
Before going into the merits of the analyses we carried out, some disclaimers are worth mentioning…
- Forget help desk metrics: Usually, when people talk about analyzing Support data – and Google the subject – the concern is with metrics such as average first response time or average ticket resolution time, among other indicators. Although important, these numbers are not of direct interest to Product Management. The analyses we detail in this post are fundamentally different.
- Product > Software: When we talk about “Product”, we are talking about the entire experience our customer has with the RD Station – from the usability of the software to contacting a consultant in the Customer Success area. Consequently, if (for example) many tickets of the “type” Education about Email Marketing functionality are being opened, this should be a concern of the Product Management area!
- Continuous improvement: We’re still learning which analyses really add value. Analyzing data simply for the sake of analysis (i.e., generating pretty but useless charts) goes completely against our Lean culture and our understanding of what good Product Management is.
- Garbage In, Garbage Out: Classifying tickets in practice is much harder than it sounds (and, oddly enough, would be worth a post of its own). We are still learning how to best classify them and, above all, how to maintain a standard, since today more than a hundred people within RD can classify a ticket. Just as important as the analysis you do is making sure its inputs are correct.
- There is no ‘one size fits all’: As in any conversation about Product Management, the analyses that work for us today may not make sense for your company – or for RD itself a year from now. Our approach has been to continually iterate over MVPs of these analyses, stripping out the data that doesn’t add value and keeping the rest for the next iteration. Someone has probably already coined the term ‘minimum viable analysis’, but the moral of the story is: start with simple analyses and iterate over them.
Today, we perform three different analyses on Support data:
1) Trend by “type” and “functionality”
The first analysis we do is on data aggregated over time. For example, what is the average number of Education “type” tickets opened per customer over the past few weeks? Does this curve show an increasing, decreasing or constant trend?
This type of analysis is important when the average number of open tickets of a certain “type” (e.g., Education) is high and actions are being taken to reduce it. Normally, when this curve shows a decreasing (or increasing) trend, we drill it down by functionality – that is, we repeat the same graph focusing on a single functionality. With this, we try to find out in which features our customers’ education is improving (or worsening).
On the other hand, when we notice that the curve has been constant for weeks, it is a warning sign that we are not acting or that the measures taken have had no effect.
The same analysis is repeated for each functionality. When some functionality shows an increasing ticket trend, for example, we also drill down the curve to discover the problems with that functionality (Bugs? UX? Education?) and prioritize measures for the next month.
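The trend check described above can be sketched very simply: compute the weekly average of tickets per customer for a given “type” and compare the recent weeks against the earlier ones. This is only an illustration – the tolerance and the data are hypothetical, not the thresholds RD actually uses:

```python
def trend(weekly_avgs, tolerance=0.05):
    """Classify a series of weekly per-customer ticket averages as
    'increasing', 'decreasing', or 'constant' by comparing the mean of
    the second half of the series against the mean of the first half."""
    half = len(weekly_avgs) // 2
    earlier = sum(weekly_avgs[:half]) / half
    recent = sum(weekly_avgs[half:]) / (len(weekly_avgs) - half)
    change = (recent - earlier) / earlier
    if change > tolerance:
        return "increasing"
    if change < -tolerance:
        return "decreasing"
    return "constant"

# Hypothetical Education-ticket averages over eight weeks:
print(trend([0.42, 0.40, 0.41, 0.39, 0.33, 0.31, 0.30, 0.28]))  # decreasing
```

A “constant” result here is exactly the warning sign mentioned above: either no measures are being taken, or the ones taken are having no effect.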
2) Use of functionality vs. Problems with functionality
Engagement metrics tell only half the story. It is “easy” to find out which features of your product are most used, but which features do users find most difficult? Are there little-used features that generate a lot of tickets? And what is the “type” of these tickets?
Usage data for each feature is obtained through Evergage, a tool we use to monitor customer engagement with our product. Technically, “usage” is measured by the percentage of accounts that have taken a certain action within the functionality in a weekly period. This information, combined with the data coming from Zendesk, results in the chart above.
Today, we pay attention primarily to two quadrants. Quadrant II is an example of features in a “normal” state, that is, with high utilization – consequently, with a high number of tickets per week. Features in this quadrant have greater weight when prioritizing improvements, quick wins and bugs.
The fourth quadrant should always be empty: it indicates “problematic” features that, despite low usage, generate a lot of tickets. We currently have an important functionality in this quadrant, and we’ve already prioritized product actions (e.g., enhancements) and education (e.g., Help Center articles) to “push” this feature to the left and up.
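A rough sketch of how features could be bucketed from a weekly snapshot of usage and ticket volume. The quadrant labels follow the chart described above (II = high usage and high tickets, IV = low usage and high tickets); the cutoffs, the remaining-quadrant assignment, and the feature data are all hypothetical – in practice the cutoffs would come from the real distribution:

```python
def classify_feature(usage_pct, tickets_per_week,
                     usage_cut=50.0, ticket_cut=100):
    """Bucket a feature by the quadrant labels used in the chart:
    II = high usage, high tickets (the 'normal' state for popular features);
    IV = low usage, high tickets (the quadrant that should stay empty).
    The I/III assignment for low-ticket features is an assumption."""
    if tickets_per_week >= ticket_cut:
        return "II" if usage_pct >= usage_cut else "IV"
    return "I" if usage_pct >= usage_cut else "III"

# Hypothetical weekly snapshot: feature -> (usage %, tickets/week).
features = {
    "Email Marketing": (72.0, 310),
    "Landing Pages":   (65.0, 240),
    "Webhooks":        (8.0, 150),   # low usage, many tickets: problematic
    "Exports":         (12.0, 20),
}

problematic = [name for name, (u, t) in features.items()
               if classify_feature(u, t) == "IV"]
print(problematic)  # ['Webhooks']
```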
3) Launch of new features
The release of a new feature – or an improvement to an existing one – goes far beyond putting the feature live. Along with the rollout, there must be a synchronized effort to educate the customer base and communicate the novelty.
Besides customer engagement with the new functionality, one of the Success Criteria we take into account when determining whether a release was successful is the impact it had on Support.
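One simple way to quantify that impact is to compare the feature's average weekly ticket volume before and after the rollout. A minimal sketch – the data, the window, and the interpretation thresholds are hypothetical:

```python
def release_impact(weekly_tickets, release_week):
    """Ratio of the average weekly ticket volume after a release to the
    average before it. Values well above 1.0 suggest the rollout created
    friction; values near or below 1.0 suggest education kept pace."""
    before = weekly_tickets[:release_week]
    after = weekly_tickets[release_week:]
    avg_before = sum(before) / len(before)
    avg_after = sum(after) / len(after)
    return avg_after / avg_before

# Hypothetical ticket counts for one feature, released at week 4:
impact = release_impact([50, 48, 52, 50, 90, 85, 80, 75], release_week=4)
print(round(impact, 2))  # 1.65
```

A spike like this would then be drilled down by ticket “type” to see whether the problem is Bugs, UX, or simply missing education material.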
Why should you review your Support tickets?
Support says a lot more about your Product and your customers’ difficulties than you might think. If you still don’t review the tickets your customers open, get started today! Create simple analyses and iterate over them. As trivial as these reviews are (and they really are), we’ve already gained a number of important insights that will allow us to take our product to the next level.
But let’s not stop there. We want to go a little further than the analyses we’ve carried out so far. We want to reach a level that allows us to extract strategic insights for the development of RD Station. To that end, we are experimenting with a new analysis of Support tickets; that will be the subject of the second part of this post.