AI & Strategy · 14 min read

Your App Launched. Now What? The First 90 Days Post-Launch Playbook

Launching your app is the easy part. The first 90 days determine whether it grows or dies quietly. Here is the exact playbook we use with every product we ship.


Nate Laquis

Founder & CEO

The Real Work Starts After You Ship

Most founders treat launch day as the finish line. They spend months building, then hit publish and wait for users to show up. Three weeks later, traffic is flat, activation is low, and the team is demoralized. This is not a product problem. It is a post-launch strategy problem.

The first 90 days after launch are the highest-leverage period your product will ever see. Your team is still deeply familiar with every part of the codebase. Users are most forgiving when the product is new. Patterns in your data are easy to see because you have not layered years of features on top of them. Decisions made in this window compound over months and years.

This playbook breaks the first 90 days into three distinct phases, each with a different goal, a different set of metrics, and a different set of tools. Follow it and you will enter month four with a clear growth model. Ignore it and you will spend the next year guessing.


Days 1 to 14: Stabilize, Monitor, and Triage

Your only job in the first two weeks is to make the product work reliably for the users you have. Not to grow. Not to build new features. Stabilize first.

Set Up Error Tracking Immediately

If you do not already have Sentry installed, stop reading and install it right now. Sentry captures every unhandled exception, JavaScript error, and server crash in real time. The free tier covers up to 5,000 errors per month. Paid plans start at $26/month and are worth every dollar. Without it, your users are silently experiencing crashes that you will never hear about because most users do not report bugs. They just leave.

Configure Sentry to alert you via Slack or email for any new error type the moment it appears. Set up BetterStack (formerly Logtail and Better Uptime) for uptime monitoring. BetterStack pings your app every 30 seconds from multiple regions and pages you within 60 seconds of downtime. Pricing starts at $24/month. You should know about outages before your users tweet about them.

Establish Your Bug Triage Process

Every bug report that comes in during weeks one and two needs a severity label within 24 hours. Use three tiers: P1 (blocking core functionality, fix within 24 hours), P2 (degraded experience, fix within 72 hours), P3 (cosmetic or edge case, fix in next release cycle). Assign each bug to a specific engineer, not to "the team." Shared ownership means no ownership.

Create a public-facing status page using BetterStack's status page feature (included in their $24/month plan). Post updates there during any incident, even brief ones. Users who can see that you are actively working on a problem are far more forgiving than users who see nothing. Transparency early builds trust that sustains you through future problems.

Days 1 to 14 Checklist

  • Sentry installed and alerting: Zero untracked errors in production.
  • BetterStack uptime monitoring active: Alerts fire to at least two engineers.
  • Status page live: Linked from your app's footer and help docs.
  • Bug triage process documented: Every team member knows the severity criteria.
  • On-call rotation established: Someone is always responsible for production issues.
  • Performance baseline captured: Record p95 response times for your five most-used API endpoints. This becomes your benchmark for regressions.
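Capturing the p95 baseline does not require an APM product on day one. A minimal sketch, assuming you can pull a day of response-time samples (in milliseconds) for each endpoint from your access logs; the endpoint path and sample values here are illustrative:

```python
import math

def p95(samples_ms):
    """95th-percentile response time via the nearest-rank method."""
    if not samples_ms:
        raise ValueError("no samples")
    ordered = sorted(samples_ms)
    # Nearest-rank: ceil(0.95 * n) is the 1-based rank of the p95 sample.
    rank = math.ceil(0.95 * len(ordered))
    return ordered[rank - 1]

# Baseline per endpoint from one day of samples (illustrative data).
baseline = {
    "/api/login": p95([120, 135, 180, 95, 240, 110, 150, 400, 130, 125]),
}
```

Record the resulting numbers somewhere durable (a doc, a dashboard annotation) so that a regression in month three has something concrete to be compared against.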

Days 15 to 30: Find Your Activation Metric

Once the product is stable, shift your focus to understanding why users stay or leave. The single most important thing you can figure out in weeks three and four is your activation metric: the specific action that separates users who stick around from users who churn.

Every product has one. For Slack, it was a team exchanging 2,000 messages. For Dropbox, it was putting at least one file in the Dropbox folder. For your product, you need to find it empirically, not guess at it.

Install Product Analytics

PostHog is the right choice for most early-stage products. It is open-source, self-hostable, and the cloud version is free up to 1 million events per month. PostHog gives you session recordings, funnel analysis, feature flags, and user cohorts in one tool. The session recording feature alone is worth it. Watching real users navigate your app for the first time will destroy every assumption you made during development.

If your product has a strong B2C analytics use case or you need more robust SQL-based analysis, Mixpanel is the alternative. Mixpanel's free tier covers 20 million events per month and its funnel and retention reports are genuinely best-in-class. The main drawback is cost at scale: Mixpanel's Growth plan starts at $28/month but jumps quickly based on event volume.

Finding Your Activation Metric

To find your activation metric, build a retention cohort report in PostHog or Mixpanel. Group users by their first week of signup. Then look at which actions correlate with users still being active 30 days later. You are looking for the action where users who complete it have meaningfully higher 30-day retention than users who do not.

Common patterns: completing an onboarding step, connecting an integration, inviting a teammate, creating a piece of content, or completing one full workflow end to end. Once you identify it, that action becomes your North Star for the next phase. Every product decision should be evaluated by the question: does this help more users reach the activation moment faster?
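The comparison behind this search is simple: for each candidate action, split a signup cohort by whether users performed it in week one, then compare 30-day retention between the two groups. A minimal sketch, assuming you have exported per-user first-week actions and a retained-at-day-30 flag from your analytics tool (the field names and cohort data are placeholders):

```python
def retention_lift(users, action):
    """Compare 30-day retention between users who did and did not perform `action`.

    `users` is a list of dicts: {"actions": set of first-week actions,
    "retained_d30": bool}. Returns (rate_with_action, rate_without_action).
    """
    def rate(group):
        return sum(u["retained_d30"] for u in group) / len(group) if group else 0.0

    did = [u for u in users if action in u["actions"]]
    did_not = [u for u in users if action not in u["actions"]]
    return rate(did), rate(did_not)

# Illustrative cohort: does inviting a teammate correlate with retention?
cohort = [
    {"actions": {"invite_teammate", "create_project"}, "retained_d30": True},
    {"actions": {"invite_teammate"}, "retained_d30": True},
    {"actions": {"create_project"}, "retained_d30": False},
    {"actions": set(), "retained_d30": False},
]
with_rate, without_rate = retention_lift(cohort, "invite_teammate")
```

Run this for every plausible candidate action and pick the one with the largest, most consistent gap. Remember that correlation is the starting point, not proof: confirm the causal story in your user interviews.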

Gather Qualitative Feedback

Analytics tells you what users are doing. Interviews tell you why. Book 10 to 15 user interviews in weeks three and four. Talk to users who activated and users who signed up but never came back. Ask open-ended questions: what made them try your product, what problem they were hoping to solve, where they got confused, and what they expected to happen that did not.

You will hear the same three to five problems repeatedly. Those are the problems worth solving. The issues only one person mentions are probably edge cases. The issues five out of ten people mention are your roadmap.


Metrics That Actually Matter in the First 30 Days

There are hundreds of metrics you could track. Here are the ones that matter in the first 30 days and the benchmarks to hold yourself to.

Weekly Active Users (WAU)

Track WAU from day one. This is your primary growth metric and the denominator for all your other rates. Do not track monthly active users early on because monthly data takes too long to give you feedback. Weekly resolution lets you see the impact of changes within a week rather than waiting a month. A healthy WAU growth rate for an early-stage product is 10 to 15% week over week. Below 5% is a signal to investigate.
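Week-over-week growth is just the percentage change between consecutive weekly counts. A small sketch, with illustrative WAU numbers:

```python
def wow_growth(wau_series):
    """Week-over-week growth rates (as fractions) for a list of weekly WAU counts."""
    return [
        (curr - prev) / prev
        for prev, curr in zip(wau_series, wau_series[1:])
    ]

rates = wow_growth([200, 224, 252, 260])  # 12%, 12.5%, then a ~3.2% slowdown
```

A series like the one above, healthy for two weeks and then dropping below 5%, is exactly the signal to investigate before it becomes a trend.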

Activation Rate

The percentage of new signups who complete your activation action within their first session or first 7 days. Benchmarks vary significantly by product type, but a rough starting point: under 20% is a serious problem, 20 to 40% is typical for products still finding their footing, 40 to 60% is good, and above 60% means your onboarding is doing its job. If your activation rate is low, the problem is almost always friction in the onboarding flow, not the product itself.

D1, D7, and D30 Retention

Day 1 retention is the percentage of users who return the day after signup. Day 7 and Day 30 follow the same pattern at longer windows. These numbers tell you whether users find value quickly and whether that value is durable enough to bring them back.

Median D1 retention across B2B SaaS products is around 25 to 35%. Median D30 is 10 to 20%. If your D1 is below 20%, your first-session experience is failing. If D7 is below D1 by more than 50%, users are not building a habit. Consumer products have lower benchmarks generally, but the same relative patterns apply.
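Computed from raw data, day-n retention is the share of a signup cohort that was active exactly n days after signup. A minimal sketch, assuming you can export each user's signup date and set of active days (the dates here are illustrative):

```python
from datetime import date, timedelta

def day_n_retention(users, n):
    """Share of users active exactly n days after signup.

    `users` is a list of dicts: {"signup": date, "active_days": set of dates}.
    """
    returned = sum(
        1 for u in users if u["signup"] + timedelta(days=n) in u["active_days"]
    )
    return returned / len(users)

cohort = [
    {"signup": date(2024, 5, 1), "active_days": {date(2024, 5, 2), date(2024, 5, 8)}},
    {"signup": date(2024, 5, 1), "active_days": {date(2024, 5, 2)}},
    {"signup": date(2024, 5, 1), "active_days": set()},
]
d1 = day_n_retention(cohort, 1)  # 2 of 3 users returned the next day
d7 = day_n_retention(cohort, 7)  # 1 of 3
```

Note that some analytics tools use a rolling window (active on or after day n) instead of the strict same-day definition above; pick one definition and keep it consistent so your trend lines stay comparable.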

Net Promoter Score (NPS)

Send an NPS survey at day 14 after signup, not at day 3 when users have barely tried the product. Ask: "How likely are you to recommend this product to a colleague, on a scale of 0 to 10?" Anything below 30 NPS means the product experience is not creating advocates. Anything above 50 means you have genuine word-of-mouth potential. Track NPS as a trend over time more than as an absolute number.
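The NPS arithmetic is the standard one: the percentage of promoters (scores 9 to 10) minus the percentage of detractors (scores 0 to 6), with passives (7 to 8) counted in the denominator only. A quick sketch with illustrative survey responses:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

score = nps([10, 9, 9, 8, 7, 7, 6, 5, 10, 3])  # 4 promoters, 3 detractors -> NPS 10
```

With small early samples, a single response can swing the score by several points, which is another reason to read NPS as a trend rather than a verdict.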

Days 31 to 90: Growth Experiments and Funnel Optimization

With a stable product, a defined activation metric, and 30 days of real user data, you are now ready to grow intentionally. The goal of days 31 to 90 is to find one or two growth levers that reliably increase your activation rate and retention, then double down on them.

Build a Growth Experiment Backlog

Start by listing every place in your user funnel where users drop off. Use your PostHog or Mixpanel funnels to identify the exact steps where the most users fall out. Then generate hypotheses for why they drop and what you could change to fix it. Each hypothesis becomes an experiment.

A good experiment has three components: a clear hypothesis (if we do X, metric Y will increase because Z), a defined success metric (activation rate, D7 retention, or conversion step), and a minimum duration (run the test for at least 7 days before reading results, 14 days is better). PostHog has built-in A/B testing via feature flags, which makes running experiments without a separate tool straightforward.
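Before declaring a winner at the end of the minimum duration, it helps to check that the observed lift is unlikely to be noise. One common readout is a two-proportion z-test; a sketch with illustrative conversion counts (scipy or your analytics tool can do this for you, but the arithmetic is short enough to inline):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates.

    |z| > 1.96 corresponds to p < 0.05 (two-sided), a common bar for
    calling an A/B test rather than eyeballing the lift.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Illustrative: 300/1000 users activated on control, 360/1000 on the variant.
z = two_proportion_z(300, 1000, 360, 1000)
significant = abs(z) > 1.96
```

A significant z only answers "is the difference real"; whether a real difference is worth shipping is still a product judgment.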

Onboarding Optimization

The highest-leverage place to experiment first is almost always onboarding. Specifically: the first 10 minutes of a new user's experience. Improvements here compound across every user who signs up going forward. Common high-impact changes include reducing the number of steps to reach the activation moment, adding contextual tooltips that explain why each step matters, pre-populating sample data so users see value before they input their own, and sending a single targeted email at the 24-hour mark to users who signed up but did not activate.
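The 24-hour email is straightforward to target: users who signed up between 24 and 48 hours ago and have not yet activated (the upper bound keeps it to a single send per user). A minimal selection sketch; the user records and field names are placeholders for however your database stores signups:

```python
from datetime import datetime, timedelta

def email_targets(users, now):
    """Users who signed up 24-48 hours ago and have not activated.

    `users`: list of dicts {"email": str, "signed_up": datetime, "activated": bool}.
    """
    return [
        u["email"]
        for u in users
        if not u["activated"]
        and timedelta(hours=24) <= now - u["signed_up"] < timedelta(hours=48)
    ]

now = datetime(2024, 6, 2, 12, 0)
users = [
    {"email": "a@x.io", "signed_up": datetime(2024, 6, 1, 9, 0), "activated": False},
    {"email": "b@x.io", "signed_up": datetime(2024, 6, 1, 9, 0), "activated": True},
    {"email": "c@x.io", "signed_up": datetime(2024, 6, 2, 9, 0), "activated": False},
]
targets = email_targets(users, now)  # only a@x.io qualifies
```

Run it on a schedule (a daily or hourly cron is fine at this stage) and measure whether recipients activate at a higher rate than a held-out group.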

A 10-percentage-point improvement in activation rate, say from 30% to 40%, means that for every 100 new users you acquire, 10 more become active users. At scale, this multiplier on your acquisition spend changes your entire unit economics.
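The unit-economics effect is easiest to see as cost per *activated* user rather than cost per signup. A quick sketch with illustrative spend and signup numbers:

```python
def cost_per_activated_user(spend, signups, activation_rate):
    """Effective acquisition cost per activated user."""
    activated = signups * activation_rate
    return spend / activated

# Same $5,000 spend and 1,000 signups; only the activation rate changes.
before = cost_per_activated_user(5000, 1000, 0.30)  # ~$16.67 per activated user
after = cost_per_activated_user(5000, 1000, 0.40)   # $12.50 per activated user
```

The spend and signup volume did not change, yet each activated user got 25% cheaper: that is the multiplier the onboarding work buys you.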

Retention Loops

Retention does not happen by accident. It is built through deliberate re-engagement mechanisms. Identify the natural "return trigger" for your product: the event that gives a user a reason to come back. For a project management tool, it is a teammate commenting on your task. For an analytics tool, it is a weekly email digest showing new data. For a CRM, it is a follow-up reminder.

Build that trigger into the product explicitly. Weekly digest emails sent by products like PostHog and Mixpanel are not random: they are retention tools. Even a simple weekly summary email showing users what happened since their last visit can meaningfully improve D30 retention. The benchmark to aim for: a 5-percentage-point improvement in D30 retention over 60 days of experimentation is a realistic and valuable goal.


Channel Experiments: Finding Where Your Users Come From

At some point in the 31 to 90 day window, you need to start seriously testing acquisition channels. Not because growth requires a massive marketing budget, but because every day spent acquiring users from an expensive channel when a cheaper one exists is money burned on unit economics that will never work.

Start With What You Can Measure

UTM parameters are not optional. Every link you share anywhere, in emails, on social, in partner content, in ads, needs UTM tags so PostHog or Mixpanel can attribute signups back to their source. It sounds tedious to set up, but you will thank yourself in week 10 when you can see exactly which channels produced your activated users versus your churned users.

The distinction matters because channel quality varies significantly. Users from organic search often activate at 40 to 50% rates because they were searching for a solution to a specific problem. Users from paid social often activate at 15 to 25% because they saw an ad and clicked out of curiosity rather than intent. Knowing this changes how much you bid and who you target.
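Once links are tagged, the attribution query is just "activation rate grouped by utm_source". A minimal sketch using the standard library; the landing URLs and signup records are illustrative, and in practice your analytics tool computes this for you:

```python
from urllib.parse import urlparse, parse_qs

def utm_source(url):
    """Extract utm_source from a landing URL (None if untagged)."""
    qs = parse_qs(urlparse(url).query)
    return qs.get("utm_source", [None])[0]

def activation_by_source(signups):
    """Activation rate per utm_source.

    `signups`: list of dicts {"landing_url": str, "activated": bool}.
    """
    stats = {}
    for s in signups:
        src = utm_source(s["landing_url"]) or "direct"
        total, activated = stats.get(src, (0, 0))
        stats[src] = (total + 1, activated + s["activated"])
    return {src: act / tot for src, (tot, act) in stats.items()}

rates = activation_by_source([
    {"landing_url": "https://app.io/?utm_source=google", "activated": True},
    {"landing_url": "https://app.io/?utm_source=google", "activated": False},
    {"landing_url": "https://app.io/", "activated": True},
])
```

The "direct" bucket is worth watching: if it dominates, your tagging has gaps and the per-channel rates you are optimizing against are unreliable.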

Channel Experiments to Run

Test at least three channels in the 31 to 90 day window. Spend at least $500 and four weeks on each before declaring it does not work. Common options for early-stage products: content marketing and SEO (slow but compounds over time, target long-tail search terms your users are actually typing), paid search via Google Ads (faster feedback, set a $50/day cap while testing), community and partnership channels (integrations with adjacent tools, guest posts, Slack communities), and cold outbound if you are selling to businesses.

The goal is not to find your perfect channel in 90 days. The goal is to eliminate the channels that clearly do not work and double the budget on the one or two that show a CAC (customer acquisition cost) that makes sense at your price point.
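Put the channel numbers side by side before deciding what to cut. A sketch computing CAC per *activated* user for each tested channel; the spend and user counts below are illustrative:

```python
def cac_by_channel(channels):
    """Cost per activated user for each channel; None when nothing activated yet.

    `channels`: dict of name -> {"spend": dollars, "activated_users": count}.
    """
    return {
        name: (c["spend"] / c["activated_users"] if c["activated_users"] else None)
        for name, c in channels.items()
    }

cac = cac_by_channel({
    "google_ads": {"spend": 1400, "activated_users": 40},   # $35 per activated user
    "paid_social": {"spend": 1200, "activated_users": 10},  # $120 per activated user
    "cold_outbound": {"spend": 500, "activated_users": 0},  # no signal yet
})
```

Compare each number against what a customer is worth at your price point: a $120 CAC is fine for a $200/month product and fatal for a $10/month one.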

The 90-Day Review: What to Measure Before You Plan Month 4

At the end of day 90, before you plan the next quarter, do a structured review of what you know. This review should produce three things: a clear picture of what is working, a clear picture of what is not, and a decision on the one metric you are going to prioritize for the next 90 days.

Metrics to Review

  • WAU growth rate: Is it accelerating, flat, or declining? Flat WAU with growing signups means your churn is matching acquisition. That is a retention problem, not a growth problem.
  • Activation rate trend: Did it improve over 90 days? By how much? If it did not move, your onboarding experiments either did not ship or did not work.
  • D30 retention: This is now a real number, not a projection. If it is below 15% for a B2B SaaS product, retention is your biggest problem and should dominate the next quarter.
  • NPS trend: Is it improving? Flat NPS after 90 days usually means you are not solving the core problem any better than you were at launch.
  • CAC by channel: What did it cost to acquire an activated user from each channel you tested? Which channel had the best quality users (highest activation and retention rates)?

Making Decisions From the Data

One mistake founders make in this review is trying to fix everything at once. Pick one number to improve in the next 90 days. If retention is your biggest problem, the entire team is focused on retention for three months. If activation rate is healthy but acquisition cost is too high, the entire team is focused on channel efficiency. Diffused effort across multiple metrics produces mediocre results across all of them. Concentrated effort on one produces breakthrough results on the one that matters most.

The best-performing products we have worked with share one characteristic: the founding team is obsessive about one metric at a time, and they do not move on until that metric is genuinely healthy. This discipline is harder than it sounds when you have a backlog of features to build, but it is what separates products that compound from products that plateau.

If you want help building your post-launch analytics stack, running your first growth experiments, or structuring your 90-day review, we have done this with dozens of products across B2B SaaS, consumer apps, and marketplaces. Book a free strategy call and we can walk through your specific numbers and tell you exactly where to focus.

Need help building this?

Our team has launched 50+ products for startups and ambitious brands. Let's talk about your project.

post-launch playbook · app launch strategy · user retention · product analytics · startup growth

Ready to build your product?

Book a free 15-minute strategy call. No pitch, just clarity on your next steps.

Get Started