10 Product Instrumentation Errors and What We Learned From Them


In the product context, instrumentation is the ability to measure and monitor the metrics that matter to product management, such as the adoption or retention of a feature. It is essential so that the product team, especially product managers, can make decisions based on metrics instead of gut feelings.

Here at Resultados Digitais we use Mixpanel, integrated with Segment.io, to centralize these metrics and make them easily accessible to product managers. But this post is not about Mixpanel: it can be generalized to any tool, since it focuses on the lessons we learned while instrumenting RD Station.

Error 1: Delegating product instrumentation planning to developers

Just as you don’t leave it up to developers to decide which features will be built, it doesn’t make sense to leave it up to them to decide what will be implemented to manage the product. The development team doesn’t always know what you need, so delegating this responsibility to them makes no sense.

Developers will certainly be the ones implementing the instrumentation, but they should use your planning as the starting point.

It is worth emphasizing not to confuse product instrumentation with engineering instrumentation; the latter is fully the development team’s responsibility.

Error 2: Not planning the instrumentation

If it is a mistake to leave instrumentation planning in the developer’s hands, it is also a mistake to think of it only in general terms. For example: “I want to measure the retention of the feature” or “I want to know how many users accessed the feature”.

Passing only this on to the developer is a recipe for an incomplete, confusing implementation that, in the end, won’t provide the data you want. The instrumentation plan must be written so that the developer can implement it without having to think about anything but the code.

What to plan:

  • The name of each event and its properties;
  • When, where, and how each event is triggered;
  • What kind of analysis you intend to do, and how often.
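A minimal sketch of what one plan entry might look like, written as TypeScript so the spec and the implementation can live in the same language. Every name below (EventPlan, "Feature Accessed", the field values) is illustrative, not from our actual plan:

```typescript
// Illustrative shape for one entry of an instrumentation plan.
// All names here are made-up examples, not RD Station's real plan.
interface EventPlan {
  name: string;          // event name, following your naming convention
  properties: string[];  // properties sent along with the event
  trigger: string;       // when, where, and how the event fires
  analysis: string;      // what analysis it feeds, and how often
}

const featureAccessed: EventPlan = {
  name: "Feature Accessed",
  properties: ["accountId", "userId", "featureVersion"],
  trigger: "Once per session, when the user first opens the feature screen",
  analysis: "Weekly adoption and retention reports",
};
```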

Error 3: Instrumenting absolutely everything

Instrument all the things!

Having a lot of data might seem desirable, but you need to ask yourself “what conclusions can I draw by looking at this metric?” If you don’t know, it’s probably useless to instrument it.

Of course, you might think: “what if I ever need this data? Better instrument it now.” Besides not being very consistent with agile principles, chances are you will never actually need it, or will need something slightly different, making the effort a waste. It’s also important to remember that instrumentation generates code, and the more lines of code you have, the more complex and costly the software is to modify.

What to instrument:

These metrics vary from team to team and product to product, but in general they can be classified under the Pirate Metrics (AARRR: Acquisition, Activation, Retention, Referral, Revenue) and customer success metrics.

An example of this was when we changed the RD Station interface. Since users could choose between the new and the old interface, we instrumented adoption and retention metrics:

  • Adoption: the percentage of the total user base that had ever accessed the new interface.
  • Retention: of the users who had ever accessed the new interface, the percentage that stayed on it.
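As a hedged sketch, both metrics boil down to simple ratios; the counts here are made-up placeholders for numbers you would pull from your analytics tool:

```typescript
// Made-up counts standing in for numbers pulled from the analytics tool.
const totalUsers = 20_000;     // entire user base
const everAccessedNew = 8_000; // ever opened the new interface
const stillOnNew = 6_500;      // currently still using it

function percentage(part: number, whole: number): number {
  return whole === 0 ? 0 : (part / whole) * 100;
}

const adoption = percentage(everAccessedNew, totalUsers);  // 40% of the base
const retention = percentage(stillOnNew, everAccessedNew); // 81.25% of adopters
console.log({ adoption, retention });
```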

Error 4: Not having event name conventions

These are not the events you are looking for

Soon your event count will grow from 10 to 100+, and finding what you want becomes a painful task. Also, without a clear naming convention, you will very likely end up with events that look like they measure one thing but actually measure another.
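One widely used convention (an assumption on our part; pick whichever convention suits your team) is “Object + Action” in the past tense, with every name kept in a single module so developers pull names from one place instead of inventing variants inline:

```typescript
// A single source of truth for event names, following an assumed
// "Object + Action (past tense)" convention. Names are illustrative.
export const Events = {
  InterfaceAccessed: "Interface Accessed",
  WelcomeModalSlideViewed: "Welcome Modal Slide Viewed",
  MethodologyContentViewed: "Methodology Content Viewed",
} as const;

// Usage: analytics.track(Events.InterfaceAccessed, { ... })
// instead of a free-form string typed by hand at each call site.
```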

Error 5: Triggering events that don’t actually measure what you think they measure

This was a recent lesson for us: when we launched the new RD Station, full of new features, users were greeted on login by a welcome modal presenting the changes. The modal had 7 slides, each of which could be closed or moved to the next or previous one.

We wanted to measure how far our users read the announcements and, if necessary, send in-app notifications for the ones they hadn’t read. So we implemented an event for each slide change in the modal. Granted, this would not guarantee that the user read the content, but at least it would indicate that they had passed through it.

The problem is that the modal also changed slides by itself every 7 seconds. In addition, the modal was infinite: when it reached the end, it went back to the first slide. This means that if a user opened the app and went for a coffee, they would cycle through all the slides multiple times without actually seeing any of them. In the end, the event did not guarantee what we wanted to measure.
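A hedged sketch of one possible fix: only track slide changes the user actually initiated, ignoring the 7-second auto-advance timer. The track call follows Segment’s analytics.js signature; the modal wiring is illustrative:

```typescript
// Assumed global from Segment's analytics.js snippet.
declare const analytics: { track(event: string, props?: object): void };

// Track only slide changes the user actually initiated.
function onSlideChange(slideIndex: number, userInitiated: boolean): void {
  if (!userInitiated) return; // 7-second auto-advance: no event fired
  analytics.track("Welcome Modal Slide Viewed", { slideIndex });
}

// Timer calls:        onSlideChange(nextIndex, false)
// "Next" button call: onSlideChange(nextIndex, true)
```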

Error 6: Not calculating the potential number of fired events

The welcome modal gave us more than one lesson. Remember the user who opened RD Station and went for a coffee? If only 500 users (2–3% of our base) did the same, we would get a flurry of events:

  • 500 events every 7 seconds (the rate at which the modal changed slides)
  • 4,285 events in 1 minute
  • 257,100 events in 1 hour

And that is exactly what happened: in 4 days this event was fired over TWO MILLION times. Considering that our current Mixpanel limit is 4 million events per month, it was quite the outlier. 🙂
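This back-of-the-envelope math is worth doing as a habit before shipping any high-frequency event; a quick sketch with the numbers above (the quota comparison is our own addition):

```typescript
const idleUsers = 500;          // users who left the modal auto-advancing
const secondsPerSlide = 7;      // the modal's auto-advance interval
const monthlyQuota = 4_000_000; // our Mixpanel plan's event limit

const eventsPerMinute = Math.floor(idleUsers * (60 / secondsPerSlide)); // 4,285
const eventsPerHour = eventsPerMinute * 60;                             // 257,100
const hoursToBlowQuota = monthlyQuota / eventsPerHour;                  // ≈ 15.6 hours

console.log({ eventsPerMinute, eventsPerHour, hoursToBlowQuota });
```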

Error 7: Not using (and planning) event properties

In Mixpanel, as in most product instrumentation tools, you can send a series of properties along with an event, both to better identify what was done and to help segment the data.

By default, we send the account ID and the ID of the user who triggered the event with every event. But you can send any kind of information as an event property, for example: which browser was used, which version of the feature triggered the event (in the case of partial rollouts), and so on.

But remember the maxim from Error 3: don’t send a bunch of data just because you can. If the data is not useful for your analyses and decisions, don’t send it as a property.
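A minimal sketch of an event carrying only purposeful properties; the track call follows Segment’s analytics.js signature, and the property names and values are illustrative:

```typescript
// Assumed global from Segment's analytics.js snippet.
declare const analytics: { track(event: string, props?: object): void };

analytics.track("Feature Used", {
  accountId: "acc_123", // sent by default on every event
  userId: "usr_456",    // sent by default on every event
  browser: "Chrome",    // only worth sending if you'll segment by it
  featureVersion: "v2", // useful during a partial rollout
});
```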

Error 8: Creating events that are too similar

Events that are too similar are a symptom that you are probably creating too many events. An example here at Resultados Digitais was the set of events we used to monitor the use of our Methodology pages: one nearly identical event per page, all of which could be replaced by a single event.

But with a single event, wouldn’t we lose the information about which Methodology page was accessed, as well as the time the user spent on it? No: we can use event properties, shown in the previous error, for exactly this. Just add 2 properties to the single event: ContentName (e.g. BlogCreation) and ViewTime.

Event simplification
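In code, the simplification might look like this (a sketch: the event and property names mirror the example above, and the ViewTime unit is assumed to be seconds):

```typescript
// Assumed global from Segment's analytics.js snippet.
declare const analytics: { track(event: string, props?: object): void };

// One event for every Methodology page, instead of one event per page.
analytics.track("Methodology Content Viewed", {
  ContentName: "BlogCreation", // which Methodology page was accessed
  ViewTime: 94,                // time spent on the page, in seconds (assumed unit)
});
```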

Error 9: Not testing the analysis you planned

Once you have planned your instrumentation, following your conventions and measuring only the truly useful metrics, it is all too common to forget about it until it’s time to actually take the measurements. That’s when you find out: “Damn, the data I needed can’t be used the way I sent it.” And your instrumentation goes down the drain.

To avoid this, test the analyses you plan to do (e.g. cohort, funnel…) and simulate how you will receive the data, to check that you can really measure what you want, how you want it, and with the segmentations you want.

Error 10: Not instrumenting the product 🙂

Just as you wouldn’t fix a car without knowing what’s wrong with it, you shouldn’t change the product without the key metrics needed to make the right decisions about the way forward. Start small, instrumenting the most important metrics, and make data-driven decisions part of the product team’s culture.