Success Metrics
Avi Siegel · 4 min read
What are success metrics?
Success metrics are measurements that prove whether features are actually delivering their intended results. They are focused specifically on the feature level, providing concrete evidence that the work being done is making a meaningful difference for users and the business, while also not wreaking havoc elsewhere in the product.
Why do teams specify success metrics?
The primary goals of success metrics are to:
Prove features are solving problems as intended
Guide iteration and improvement decisions
Enable data-driven discussions about feature impact
How do you establish success metrics for a feature?
Successfully measuring feature impact involves four core activities: defining the right metrics, defining what success looks like, creating the necessary measurement infrastructure, and following through on monitoring and analysis.
Define metrics
Every feature needs three distinct types of metrics to properly assess its performance:
Primary metrics (value indicators): These measure whether the feature actually solved the problem it was meant to solve. For example, if you built a bulk editing feature to help users save time and be more productive, the primary metric would be the actual time saved by users, not just whether they used the feature.
Secondary metrics (usage indicators): These measure adoption and engagement patterns. While not sufficient on their own, they help you understand if users are finding and utilizing the feature. This includes metrics like percentage of target users engaging with the feature, time-to-first-use, and return usage rates.
Guardrail metrics ("don’t break stuff" indicators): These ensure the feature isn’t negatively impacting other parts of the product. For example, you might want to track conversion rates, page load times, support ticket volume, or your AWS bill.
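As a concrete sketch, here's how the three metric types for the bulk editing example might be written down up front. This is a minimal Python illustration; the metric names and descriptions are hypothetical, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    kind: str  # "primary", "secondary", or "guardrail"
    description: str

# Hypothetical metric set for the bulk editing example above
BULK_EDIT_METRICS = [
    Metric("avg_time_saved_per_session_sec", "primary",
           "Time saved per editing session vs. the pre-launch baseline"),
    Metric("pct_target_users_adopted", "secondary",
           "Share of target users who have used bulk editing at least once"),
    Metric("return_usage_rate", "secondary",
           "Share of adopters who use the feature again within 30 days"),
    Metric("page_load_time_p95_ms", "guardrail",
           "95th percentile load time on the editing screen"),
    Metric("support_ticket_volume", "guardrail",
           "Weekly support tickets mentioning editing"),
]
```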
Define success
Before you start measuring, establish clear thresholds for what constitutes success and failure for each metric. This prevents post hoc rationalization of results and ensures honest assessment of feature impact (or lack thereof).
For each metric, define:
Success threshold: The minimum improvement to consider the feature successful - e.g., "at least 25% reduction in time-to-completion"
Failure threshold: The point at which intervention is needed - e.g., "conversion rate dropped by 5% (compared to an expected 0% change)"
Note that "not successful" is not the same as "failure". A feature may not have as big an impact as you expected but still be worth keeping, whereas a feature that made matters worse may need to be removed.
Create measurement infrastructure
Before building the feature, set up the systems needed to track its success.
Data collection: Implement necessary tracking and logging
Dashboards: Create views that clearly display all relevant metrics
Baseline metrics: Gather pre-launch data for later comparison
Alerts: Configure (ideally automated) notifications for concerning trends
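Here's a rough sketch of the data collection and alerting pieces. The track_event() function is a hypothetical stand-in for whatever analytics SDK you use, and the alert is a simple comparison against the pre-launch baseline:

```python
import logging
from datetime import datetime, timezone

logger = logging.getLogger("feature_metrics")

def track_event(user_id: str, event: str, properties: dict) -> None:
    """Hypothetical stand-in for your analytics SDK's track() call."""
    logger.info("event=%s user=%s props=%s ts=%s", event, user_id,
                properties, datetime.now(timezone.utc).isoformat())

def check_guardrail(current: float, baseline: float,
                    max_drop: float = 0.05) -> None:
    """Alert if a guardrail metric drops more than max_drop vs. its baseline."""
    drop = (baseline - current) / baseline
    if drop > max_drop:
        # Hypothetical hook: page the on-call, post to Slack, etc.
        logger.warning("Guardrail breach: %.1f%% drop vs. baseline", drop * 100)

track_event("u_123", "bulk_edit_used", {"items_edited": 42})
check_guardrail(current=0.031, baseline=0.034)  # e.g., conversion rate
```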
Monitor & analyze
Before you launch the feature, you should already have a plan for checking back in on how everything is going. For example:
Pre-launch: Collect baseline metrics for later comparison
Week 1: Daily checks on guardrail metrics to catch issues early
Weeks 2-4: Weekly deep dives on initial value and adoption
Months 2-3: Monthly check-ins on sustained impact
Quarter end: Full analysis against original goals
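That cadence can even live in code next to your dashboards, so the check-ins don't quietly get forgotten. A sketch with a hypothetical launch date:

```python
from datetime import date, timedelta

LAUNCH = date(2025, 1, 6)  # hypothetical launch date

REVIEW_PLAN = [
    # (check, first day, last day, repeat every N days)
    ("guardrail check",        LAUNCH,                      LAUNCH + timedelta(days=6),   1),
    ("value/adoption dive",    LAUNCH + timedelta(days=7),  LAUNCH + timedelta(days=27),  7),
    ("sustained-impact check", LAUNCH + timedelta(days=28), LAUNCH + timedelta(days=89), 30),
    ("full goal analysis",     LAUNCH + timedelta(days=90), LAUNCH + timedelta(days=90),  1),
]

def reviews_due(today: date) -> list[str]:
    """Return the checks scheduled for a given day."""
    return [name for name, start, end, every in REVIEW_PLAN
            if start <= today <= end and (today - start).days % every == 0]

print(reviews_due(LAUNCH + timedelta(days=14)))  # -> ['value/adoption dive']
```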
Who should own a feature’s success metrics?
The product manager holds primary responsibility for a feature's success metrics. While they will collaborate with others throughout the process, the PM must ultimately ensure that:
The right metrics are being tracked
The metrics are actually being monitored
Actions are taken based on the results
Further, a feature's success metrics should tie back to the team's OKRs for the quarter, which in turn should connect to the company's North Star. This creates a clear chain of value:
Success metrics prove features are delivering intended value
OKRs show the team is moving key metrics in the right direction
The North Star confirms the company is progressing toward its vision
Best practices surrounding success metrics
Keep metrics focused on value. Resist the temptation to measure only what’s easy to track, which is often merely feature usage. Just because someone is using the feature doesn’t mean it’s useful or valuable. In other words, don’t create vanity metrics.
Automate as much as possible. Manual data collection is often performed inconsistently. Still, especially at earlier stage companies, manual data collection is better than having no data.
Connect metrics to strategy. Feature success metrics should ladder up to team OKRs and ultimately to the company’s North Star.
Set clear thresholds for success and failure. Don't give yourself room to decide after the fact that everything went fine.
Use feature flags. Make it easy to roll back problematic changes when necessary.
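For illustration, a minimal hand-rolled flag might look like the sketch below. In practice, most teams use a feature flag service rather than rolling their own; the flag name and rollout logic here are hypothetical.

```python
import hashlib

FLAGS = {"bulk_editing": {"enabled": True, "rollout_pct": 25}}

def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket users so rollout can be dialed up or down."""
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < cfg["rollout_pct"]

# Rolling back is one config change, no redeploy:
# FLAGS["bulk_editing"]["enabled"] = False
if is_enabled("bulk_editing", "u_123"):
    pass  # render the new bulk editing UI
```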
Actually check on your success metrics. Most teams formalize their success metrics in feature specs and then never revisit them. Come back later and see how the feature actually performed.
Beware of confusing correlation with causation. Just because metrics improved after launch doesn't necessarily mean your feature caused the improvement, especially when multiple teams are shipping features that affect the same areas of the product.
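One common safeguard is a randomized holdout: keep the feature off for a slice of users and compare the groups directly, rather than trusting pre/post trends. A sketch with made-up numbers:

```python
def relative_lift(treatment_rate: float, control_rate: float) -> float:
    """Lift of the treatment group over the control group, as a fraction."""
    return (treatment_rate - control_rate) / control_rate

# Sitewide conversion rose after launch, but so did the holdout's:
print(f"{relative_lift(0.062, 0.060):+.1%}")  # -> +3.3%
# Most of the observed improvement happened without the feature, too.
```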