
Mobile App Analytics: What to Track and Why

You cannot improve what you do not measure. Here is which mobile app metrics actually matter and how to set up analytics that drive real product decisions.


Nate Laquis

Founder & CEO

Why Most Apps Track the Wrong Metrics

[Image: Mobile app analytics dashboard showing key performance metrics]

Download counts feel good. They are a number you can put in a pitch deck, celebrate on LinkedIn, and use to impress stakeholders who do not know any better. The problem is that downloads tell you almost nothing about whether your app is actually working. A hundred thousand downloads with a 5% day-30 retention rate means you have built an expensive revolving door.

The instinct to track vanity metrics is understandable. They are easy to collect, they trend upward when you run ads, and they look impressive at a glance. But they are lagging indicators of activity, not leading indicators of value. By the time your download count reveals a problem, you are already months behind on fixing it.

The apps that compound over time, the ones that grow through word of mouth and retain users for years, are built by teams that obsess over a much smaller set of harder metrics. They care about whether users come back tomorrow, whether users complete the core action the app was designed to support, and whether users are willing to pay or refer others. None of those things show up in your App Store download report.

There is also a subtler trap: tracking too many metrics at once. Some teams instrument every tap, every scroll, every screen transition, and then drown in data they never act on. More events in your analytics platform does not mean more insight. It often means less, because the signal gets buried in noise and nobody on the team agrees on which number to move.

The first step to useful analytics is being honest about what you are actually trying to learn. What does success look like for your app in 90 days? Define that outcome first, then work backward to identify the two or three metrics that are most predictive of it. Everything else is secondary. Good analytics starts with a clear question, not a long list of things you could theoretically measure.

The Five Metrics That Actually Matter

Across dozens of mobile products, the same small set of metrics consistently separates apps that grow from apps that stall. You do not need a data science team to track these. You need discipline about actually reviewing them each week.

DAU/MAU ratio (stickiness). This is daily active users divided by monthly active users, expressed as a percentage. A ratio above 20% is generally healthy. Above 50% is exceptional. If this number is low, users are not forming a habit around your app, and no amount of new-user acquisition will fix that. Fix retention before scaling spend.

Day-1, Day-7, Day-30 retention. These three numbers tell you where users fall off your retention curve. Industry benchmarks vary by category: mobile games often see Day-30 retention around 5-10%, while utility apps can sustain 25-35%. Knowing your category benchmark matters. What matters more is whether your curve is improving over time as you ship product changes.

Session length and session frequency. These two metrics together paint a picture of engagement depth. A two-minute session five times a week tells a very different story than a 30-minute session once a month, even if the total time-in-app is similar. Match these to the intended use case of your app. A meditation app should have short, frequent sessions. A productivity tool should have longer ones.

Core action conversion rate. Every app has one thing it was built to do: book a reservation, complete a workout, send a message, make a purchase. What percentage of users who open your app actually do that thing in a given session? If this rate is low, you have a friction problem somewhere between launch and core action. This is often where the most high-leverage product work lives.

Lifetime value (LTV) by acquisition channel. Not all users are equal. A user acquired through organic search often has three to four times the LTV of a user acquired through a broad paid campaign. If you are not segmenting LTV by channel, you are almost certainly over-investing in channels that look good on cost-per-install reports but produce users who churn quickly.
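To make the first two of these concrete, here is a minimal sketch in Python of how DAU/MAU stickiness and day-N retention fall out of raw app-open events. The toy event log and function names are illustrative, not tied to any particular analytics SDK.

```python
from datetime import date

# Toy event log: (user_id, event_date), one row per app open.
events = [
    ("u1", date(2024, 5, 1)), ("u1", date(2024, 5, 2)), ("u1", date(2024, 5, 8)),
    ("u2", date(2024, 5, 1)), ("u2", date(2024, 5, 2)),
    ("u3", date(2024, 5, 1)),
]

def dau(events, day):
    """Distinct users active on a given day."""
    return {u for u, d in events if d == day}

def mau(events, day):
    """Distinct users active in the 30 days ending on `day`."""
    return {u for u, d in events if 0 <= (day - d).days < 30}

def day_n_retention(events, cohort_day, n):
    """Share of users who installed on `cohort_day` and returned exactly n days later."""
    first_seen = {}
    for u, d in sorted(events, key=lambda e: e[1]):
        first_seen.setdefault(u, d)
    cohort = {u for u, d in first_seen.items() if d == cohort_day}
    if not cohort:
        return 0.0
    returned = {u for u, d in events if u in cohort and (d - cohort_day).days == n}
    return len(returned) / len(cohort)

stickiness = len(dau(events, date(2024, 5, 2))) / len(mau(events, date(2024, 5, 2)))
print(f"DAU/MAU: {stickiness:.0%}")  # 2 of 3 users active today -> 67%
print(f"Day-1 retention: {day_n_retention(events, date(2024, 5, 1), 1):.0%}")
```

In practice your analytics platform computes these for you; the value of seeing the mechanics is knowing exactly what the dashboard number means.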

Choosing an Analytics Platform

The right analytics platform depends on your team size, your technical capacity, and what questions you are actually trying to answer. Here is an honest comparison of the tools most product teams evaluate.

Mixpanel is the gold standard for product analytics at growth-stage companies. Its funnel, cohort, and flow analysis tools are excellent, and the query interface is fast enough that non-technical PMs can use it independently. There is a free tier for up to 20 million monthly events; the Starter plan begins at roughly $28/month. At scale, it can get expensive fast. If you are tracking tens of millions of events per month, budget accordingly or negotiate an annual contract.

Amplitude is Mixpanel's closest competitor and arguably has stronger data governance features. Its free tier is generous (10 million monthly events), and the Plus plan at $49/month works well for early-stage teams. Amplitude's behavioral cohorts and predictive analytics features are more polished than Mixpanel's at the enterprise tier. If your team will eventually need advanced ML-based insights, Amplitude is worth the slightly steeper learning curve.

PostHog is the open-source option and the right choice if data privacy or self-hosting is a non-negotiable requirement. You can run PostHog entirely on your own infrastructure, which means no third-party data processing and no per-event pricing surprises. The product has matured significantly in the past two years. Feature flags, session replay, and A/B testing are all built in. The tradeoff is that setup and maintenance require more engineering effort than a SaaS solution.

Firebase Analytics is free, deeply integrated with the Google ecosystem, and a reasonable starting point for teams building on React Native or Flutter. Its event querying is less flexible than Mixpanel or Amplitude, and the BigQuery integration, which you need for serious analysis, costs extra. Use Firebase if you are pre-revenue or building a simple consumer app. Migrate to Mixpanel or Amplitude once you have paying users and real product questions to answer.

One practical note: whichever platform you choose, plan your event schema before you implement anything. Retrofitting a taxonomy onto an existing implementation is painful and produces months of inconsistent historical data.

Designing Your Event Taxonomy

[Image: Dashboard analytics showing mobile app user behavior and retention data]

Your event taxonomy is the schema that defines how you describe user behavior in your analytics platform. A clean taxonomy makes your data queryable, your reports reproducible, and your team aligned. A messy one means six months from now you will have four events that all claim to track the same thing, and nobody will know which one is correct.

Start with a naming convention and enforce it. The most common standard is Object-Action: Workout Started, Profile Updated, Payment Completed. Nouns first, past-tense verbs second, title case throughout. This makes events scannable in a long list and ensures every event name unambiguously describes what happened. Avoid abbreviations, avoid snake_case mixed with camelCase, and avoid names like Button Clicked that tell you nothing about which button or why it matters.
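One way to enforce the convention is a lint check in CI. The regex below is a deliberately crude sketch: it assumes every word is a single capitalized token and that the past-tense verb ends in "ed", which covers names like Workout Started but not irregular verbs or prepositions.

```python
import re

# Crude check for the Object-Action convention: Title Case words,
# ending in a past-tense ("...ed") verb. A sketch, not a full grammar.
EVENT_NAME = re.compile(r"^([A-Z][a-z]+ )+[A-Z][a-z]+ed$")

def is_valid_event_name(name: str) -> bool:
    return bool(EVENT_NAME.match(name))

assert is_valid_event_name("Workout Started")
assert is_valid_event_name("Payment Completed")
assert not is_valid_event_name("button_clicked")  # snake_case, not Title Case
assert not is_valid_event_name("Start Workout")   # verb not past tense
```

Even a check this simple catches most drift; the point is that the convention is enforced by a machine, not by code review vigilance.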

Add event properties generously, but deliberately. Properties are what make events analyzable. A bare Workout Started event tells you that something happened. Workout Started with properties for workout type, duration target, user subscription tier, and day of week tells you enough to actually act on. Before you instrument an event, write down three questions you want to answer with it, then make sure you are capturing the properties that would answer those questions.

User properties are separate from event properties. User properties describe persistent facts about the person: subscription status, acquisition channel, account age, experiment assignments. Event properties describe the specific action. Mixing the two creates confusion. Keep them separate in your schema documentation and in your implementation.
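Here is a sketch of how that separation looks at the call site. The identify and track functions are hypothetical stand-ins for whatever your analytics SDK provides (Mixpanel and Amplitude both expose equivalents); the point is which facts go where.

```python
# Hypothetical stand-ins for an analytics SDK's identify/track calls;
# here they just record each call to a list so the shape is visible.
calls = []

def identify(user_id, user_properties):
    calls.append(("identify", user_id, user_properties))

def track(user_id, event, event_properties):
    calls.append(("track", user_id, event, event_properties))

# Persistent facts about the person go on the user profile once...
identify("user_123", {
    "subscription_tier": "premium",
    "acquisition_channel": "organic_search",
    "experiment_onboarding_v2": "treatment",
})

# ...while facts about this specific action ride on the event itself.
track("user_123", "Workout Started", {
    "workout_type": "strength",
    "duration_target_minutes": 30,
    "day_of_week": "Tuesday",
})
```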

Fight event sprawl proactively. Teams that add events freely and never prune end up with analytics platforms that have hundreds of poorly documented, partially implemented events that nobody trusts. Establish a lightweight review process: before any new event ships, it gets a name, a description, a list of properties, and a designated owner. Audit your event list quarterly and deprecate anything that has not been queried in 90 days. Fewer, better-defined events are always more valuable than comprehensive but chaotic coverage.

Funnel Analysis and Conversion Tracking

Funnel analysis is where analytics starts to drive real product decisions. A funnel is simply a sequence of steps you want users to complete, and the measurement of how many users make it from each step to the next. The value is in finding where you are losing people and understanding why.

Start by defining your most important funnel: the path from first open to first core action. For an e-commerce app, that might be: App Opened, Product Viewed, Added to Cart, Checkout Started, Purchase Completed. For a fitness app: App Opened, Onboarding Completed, First Workout Started, First Workout Completed. The exact steps matter less than making sure you have instrumented every transition point so you can see the conversion rate at each stage.
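The mechanics of step-to-step conversion can be sketched in a few lines of Python over toy per-user event sequences (a real implementation would query your analytics platform instead):

```python
# Hypothetical five-step e-commerce funnel and toy per-user event lists.
FUNNEL = ["App Opened", "Product Viewed", "Added to Cart",
          "Checkout Started", "Purchase Completed"]

user_events = {
    "u1": ["App Opened", "Product Viewed", "Added to Cart",
           "Checkout Started", "Purchase Completed"],
    "u2": ["App Opened", "Product Viewed"],
    "u3": ["App Opened", "Product Viewed", "Added to Cart"],
    "u4": ["App Opened"],
}

def furthest_step(events, funnel):
    """Index of the last funnel step this user reached, in order."""
    step = -1
    for event in events:
        if step + 1 < len(funnel) and event == funnel[step + 1]:
            step += 1
    return step

# Count how many users reached each step.
reached = [0] * len(FUNNEL)
for events in user_events.values():
    for i in range(furthest_step(events, FUNNEL) + 1):
        reached[i] += 1

# Step-to-step conversion rates reveal where the drop-off is.
for i in range(1, len(FUNNEL)):
    print(f"{FUNNEL[i-1]} -> {FUNNEL[i]}: {reached[i] / reached[i-1]:.0%}")
```

With this toy data, the steepest drop is Added to Cart to Checkout Started (50%), which is exactly the kind of signal that should become a hypothesis.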

Once you have baseline conversion rates, the goal is to identify your biggest drop-off and form a specific hypothesis about why it is happening. "Checkout Started to Purchase Completed has a 40% drop-off" is an observation. "We think the drop-off is caused by users encountering unexpected shipping costs at the payment screen" is a hypothesis. The distinction matters because a hypothesis points you toward a specific fix you can test.

A/B testing and funnel analysis work best together. Most analytics platforms let you integrate experiment assignments as user properties, which means you can compare funnel conversion rates between your control and treatment groups directly in the same tool. This is cleaner than relying solely on a separate A/B testing platform for your primary success metric. Mixpanel and Amplitude both support this natively. PostHog has A/B testing built in.

A few practical pitfalls to avoid. First, do not measure funnels across too long a time window. A 30-day funnel window inflates your apparent conversion rate by including users who took a month to complete a step you expected them to finish in a day. Use a window that matches the realistic user journey. Second, segment your funnels by acquisition channel. A funnel that converts at 15% overall might convert at 30% for organic users and 8% for paid users, which is a very different story and requires a very different response.

Cohort Analysis and Retention Curves

If you read only one section of this guide, make it this one. Retention is the most important metric for any consumer app, and cohort analysis is the most powerful way to measure it. Everything else (growth, monetization, virality) compounds on top of retention. If your retention is broken, nothing else will save you.

A cohort is a group of users defined by a shared characteristic at a specific point in time. The most common cohort for retention analysis is the install cohort: all users who first opened your app in a given week. By tracking what percentage of that cohort returns to the app on day 1, day 7, day 14, and day 30, you get a retention curve that shows exactly how quickly users are abandoning your app and where the steepest drop occurs.

The shape of the curve matters as much as the numbers. An app with a retention curve that flattens out after day 14 has a core of engaged users, even if its day-30 number looks modest in absolute terms. An app whose curve keeps declining all the way to day 30 without flattening has a fundamental product-market fit problem that more features will not fix. The presence of a flattened curve, even at a low percentage, means some users have found genuine value. That is something to build on.

Cohort segmentation is where retention analysis becomes actionable. Compare retention curves across acquisition channels, device types, onboarding paths, and feature usage. You will almost always find that users who complete a specific onboarding step, or who use a particular feature in their first session, retain significantly better than those who do not. Those features and steps are your activation moments. They are the highest-leverage places to invest engineering and design effort.

Run a retention cohort report weekly, not monthly. Monthly reporting means you are always looking at data that is 30 days old when you make decisions. Weekly cohort snapshots let you detect the impact of product changes much faster and course-correct before a bad change compounds into a bad month. Both Mixpanel and Amplitude make weekly cohort retention tables easy to configure as saved reports your team can check on a regular cadence.

Privacy-Compliant Analytics

[Image: Mobile devices displaying app analytics and user engagement metrics]

Privacy requirements have fundamentally changed mobile analytics over the past few years. Apple's App Tracking Transparency (ATT) framework, launched in 2021, means that on iOS you must explicitly ask users for permission to track them across apps and websites. Consent rates vary widely, but industry averages hover around 40-50%. That means you may be flying blind on half your iOS user base if you rely entirely on device-level tracking.

GDPR compliance adds another layer for any app with users in the European Union. Under GDPR, analytics data that can be tied to an identifiable individual requires a lawful basis for processing, typically either consent or legitimate interest. The safest approach is to treat all behavioral analytics data as requiring explicit consent and implement a consent management platform (CMP) that gates analytics initialization until consent is granted. OneTrust and Usercentrics are the most widely deployed CMPs for mobile. Both have SDKs for iOS and Android.

Server-side analytics is increasingly the right architectural choice for teams who need reliable data without depending on device-level tracking. Instead of firing events from the client to a third-party SDK, you fire events from your own backend when meaningful actions occur. This approach is not affected by ATT, is not blocked by ad blockers, and gives you complete control over what data is transmitted and how it is stored. The tradeoff is that purely server-side tracking cannot capture client-only events like scroll depth or UI interactions. A hybrid approach, server-side for core business events and client-side for UI analytics, is the most robust architecture.
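The server-side half of that hybrid can be sketched as a small backend pipeline: business logic enqueues events, and a background worker batches them for delivery. Everything here is illustrative; `deliver_batch` is a stand-in for an HTTP POST to your analytics platform's ingestion API.

```python
import queue
import threading

event_queue = queue.Queue()
shipped = []  # stands in for the remote analytics store

def track_server_side(user_id, event, properties):
    """Called from backend handlers (order completed, subscription renewed, ...)."""
    event_queue.put({"user_id": user_id, "event": event, "properties": properties})

def deliver_batch(batch):
    # Real code would POST the batch to your platform's import endpoint.
    shipped.append(batch)

def worker(batch_size=2):
    batch = []
    while True:
        item = event_queue.get()
        if item is None:  # shutdown sentinel
            break
        batch.append(item)
        if len(batch) >= batch_size:
            deliver_batch(batch)
            batch = []
    if batch:  # flush any partial batch on shutdown
        deliver_batch(batch)

t = threading.Thread(target=worker)
t.start()
track_server_side("user_123", "Purchase Completed", {"amount_usd": 29.0})
track_server_side("user_123", "Subscription Renewed", {"plan": "annual"})
event_queue.put(None)
t.join()
```

Because these events originate on your servers, they are unaffected by ATT prompts and ad blockers, which is the whole point of the architecture.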

For anonymization, consider whether you actually need to track individual users or whether aggregate cohort data would answer most of your questions. Tools like PostHog support differential privacy features that add statistical noise to query results to prevent re-identification of individual users. If you are pre-revenue and your primary questions are about funnel conversion and retention curves rather than individual user journeys, a privacy-preserving approach that you do not have to retrofit later is worth the upfront design effort.

Building a Data-Driven Product Loop

Having great analytics instrumentation is only half the job. The other half is building a team culture and process that actually uses the data to make decisions. Most teams have better data than they use. The bottleneck is rarely the tool. It is the cadence and the discipline to translate numbers into prioritized action.

Start with a weekly metrics review. Pick a fixed time, keep it to 30 minutes, and review the same core dashboard every week without exception. The dashboard should show your five key metrics (DAU/MAU, retention, session length, core action conversion, LTV by channel), the week-over-week trend for each, and any active experiment results. The goal is not to diagnose every anomaly in the meeting. It is to maintain a shared baseline of where the product stands so that everyone on the team is making decisions from the same reality.

From the weekly review, surface one or two hypotheses worth testing. A hypothesis has three parts: an observation ("Our day-7 retention dropped 4 points this week"), a suspected cause ("We think the new onboarding flow is burying the core feature"), and a proposed test ("We will A/B test reverting the onboarding change for new installs"). Writing hypotheses in this format forces rigor. It prevents the common failure mode of shipping changes based on intuition and then cherry-picking data after the fact to justify them.

Hypothesis-driven development slows you down in the short term and dramatically accelerates you in the long term. Teams that run structured experiments learn what actually moves their metrics and compound that knowledge over time. Teams that ship on instinct spend years making the same categories of mistakes with slightly different surface details.

Finally, close the loop on every test you run. Document the hypothesis, the result, and what you learned, even when the result is null or negative. Null results are some of the most valuable data you can generate because they rule out explanations and narrow the space of things worth trying. A shared experiment log, even just a shared spreadsheet, becomes one of the most valuable assets a product team can build.
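One lightweight way to structure such a log is a record with the three hypothesis parts plus the outcome. The schema below is hypothetical, nothing more than a shape you could keep in a spreadsheet or a small script.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Experiment:
    observation: str        # what the metrics showed
    suspected_cause: str    # why we think it happened
    test: str               # the change we will A/B test
    started: date
    result: str = "pending"   # "positive", "negative", "null", or "pending"
    learning: str = ""        # documented even when the result is null

experiment_log = [
    Experiment(
        observation="Day-7 retention dropped 4 points this week",
        suspected_cause="New onboarding flow buries the core feature",
        test="A/B test reverting the onboarding change for new installs",
        started=date(2024, 5, 6),
    ),
]
```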

If you want help designing an analytics architecture for your mobile app, or you are trying to make sense of data you already have, book a free strategy call and we can walk through your specific situation together.
