Why Offline-First Is Not Just for Bad Connections
Most developers treat offline support as an edge case for users in remote areas. That framing is wrong, and it costs you retention.
Think about where your users actually open your app. On the subway with patchy signal. In an elevator that kills connectivity mid-task. On a plane before the in-flight Wi-Fi connects. In a parking garage. At a conference center with 500 people hammering the same router. These are not exotic scenarios. They are Tuesday.
Beyond connectivity gaps, offline-first architecture gives you something even more valuable: perceived performance. When your app reads from a local database instead of waiting on a network request, it feels instant. There is no spinner. No empty state. The user's data is just there. That responsiveness compounds into trust, and trust compounds into retention. Research on mobile app performance consistently identifies perceived speed, alongside stability, as one of the strongest drivers of user satisfaction.
There is also a resilience argument. Your server goes down for 20 minutes at 2pm on a Wednesday. With an online-only app, every active user hits a blank screen or a degraded error state. With an offline-first app, they keep working and the sync queue catches up automatically when your infrastructure recovers. Your server outage becomes invisible to most users.
The apps that get this right tend to be the ones people describe as "reliable." Notion, Linear, and Superhuman all invested heavily in local-first architecture early on. It is not a coincidence that those apps also have unusually high engagement and low churn. Reliability is a product feature, not just an engineering metric.
Building offline-first does add complexity. You are taking on a sync engine, a conflict resolution strategy, and a local schema that mirrors part of your server model. But you are trading a one-time engineering investment for a permanently better user experience. For most mobile products, that trade is worth it.
Local Storage Options for Mobile
The first architectural decision is where data lives on the device. Your choices matter more than most people realize, because the wrong storage layer will either cap your performance or limit your query complexity.
AsyncStorage
The default React Native key-value store. Simple, familiar, and fine for small amounts of configuration or user preferences. Not fine for structured app data. It has no transactions, no querying, and no indexing. Do not use it as your primary data layer for anything beyond a few hundred records.
MMKV
A key-value store from WeChat, ported to React Native by Marc Rousavy. It is roughly 10x faster than AsyncStorage for read and write operations. Great for settings, tokens, and small blobs of user state. Still not a relational store, so it has the same query limitations as AsyncStorage.
SQLite (via op-sqlite or expo-sqlite)
SQLite is the right choice for most apps that need structured, relational local data. It is mature, fast, and supports full SQL queries with joins, indexes, and transactions. The op-sqlite library is the current performance leader for raw SQLite access in React Native. Expo's built-in expo-sqlite is a reasonable default if you are in the managed Expo workflow. SQLite is the foundation that WatermelonDB and other higher-level libraries are built on.
WatermelonDB
WatermelonDB is a high-performance reactive database built on SQLite, purpose-built for React and React Native. It uses lazy loading, so it only pulls records into memory when they are needed. Queries are observable, meaning your UI components automatically re-render when underlying data changes. This is the library we reach for on most complex offline-first projects. The schema migration system is solid and the React hooks integration is clean.
Realm (MongoDB Realm)
An object-oriented database with built-in sync against MongoDB Atlas. If your backend is Atlas, Realm dramatically reduces the sync engineering you need to write yourself. The tradeoff is lock-in to MongoDB's cloud infrastructure and a less flexible query model compared to SQL.
For most React Native projects: use WatermelonDB for complex apps with many record types, or op-sqlite directly if you want full SQL control and are comfortable writing your own sync layer.
Designing Your Data Model for Offline
The gap between an app that technically works offline and one that works well offline is almost always in the data model. This is where the design decisions you make in week one determine how hard your sync engine is to build in month three.
What to cache locally
Not everything needs to live on the device. Static reference data (country lists, category enums, feature flags) should be cached aggressively. User-generated content that the current user owns or frequently accesses should also be local. Historical data that is rarely accessed, or records belonging to millions of other users, does not need to be pulled down wholesale. Be intentional about scope: a good offline model caches the data the user is likely to touch, not a mirror of your entire database.
Optimistic writes
Optimistic writes are the core pattern of responsive offline UIs. When the user creates a task, you write it to the local database immediately and return success to the UI. The network request to your server happens in the background. If it fails, you reconcile. This means the UI never waits for the network, and the user experience feels native-fast regardless of connectivity.
Every locally created record needs a client-generated ID. Use UUIDs or nanoids rather than relying on server-generated auto-increment IDs. If you use server IDs, you cannot create records offline without a temporary ID scheme, which adds complexity. UUID v4 or ULID both work well and are collision-resistant at any practical scale.
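The two ideas above fit together in a few lines. Here is a minimal sketch of an optimistic create, where an in-memory Map stands in for the local database and Node's randomUUID stands in for whatever UUID library you use in React Native (both are assumptions for illustration):

```typescript
import { randomUUID } from "crypto"; // in React Native, substitute a library such as react-native-uuid

interface Task {
  id: string;        // client-generated, so the record exists before the server ever sees it
  title: string;
  createdAt: number;
  synced: boolean;   // false until the server confirms the write
}

// Stand-in for the local database (WatermelonDB, SQLite, etc.)
const localTasks = new Map<string, Task>();

// Optimistic create: write locally, return immediately, sync in the background.
function createTaskOptimistic(title: string): Task {
  const task: Task = {
    id: randomUUID(), // client-generated ID -- no server round trip required
    title,
    createdAt: Date.now(),
    synced: false,
  };
  localTasks.set(task.id, task); // the UI can render this record right away
  return task;
}

// Called by the sync engine once the server acknowledges the record.
function markSynced(id: string): void {
  const task = localTasks.get(id);
  if (task) task.synced = true;
}
```

The `synced` flag is one simple way to drive a subtle "pending" indicator in the UI without ever blocking on the network.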
Queue-based mutations
Maintain an outbox queue: a local table of pending mutations that have not yet been confirmed by the server. Each entry holds the operation type (create, update, delete), the target record ID, the payload, and a timestamp. When the device reconnects, your sync engine drains the queue in order. This pattern makes it trivial to replay operations after connectivity is restored and gives you a built-in audit trail for debugging sync issues.
Sync Strategies That Actually Work
Once you have local data and a mutation queue, you need a strategy for reconciling the local state with your server. There is no universally correct approach. The right strategy depends on your data types, your user patterns, and how much engineering complexity you can absorb.
Last-write-wins (LWW)
The simplest strategy. Every record has an updated_at timestamp. When a conflict occurs, the record with the most recent timestamp wins. This is easy to implement and works well for records that are edited by a single user across multiple devices. It breaks down when two users edit the same record simultaneously, because one person's changes will be silently discarded. For most apps with per-user data ownership, LWW is the right default.
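LWW is one comparison. A sketch, assuming every record carries a client-set updated_at in epoch milliseconds (with ties going to the server copy so all devices converge on the same answer):

```typescript
interface Versioned {
  updatedAt: number; // epoch millis, written by whichever client last touched the record
}

// Last-write-wins: keep whichever copy carries the newer timestamp.
// On a tie, prefer the server copy so every device resolves identically.
function lwwMerge<T extends Versioned>(local: T, server: T): T {
  return local.updatedAt > server.updatedAt ? local : server;
}
```

The one-line body is the whole point: the simplicity is why LWW is the right default, and the silent discard of the losing copy is why it is the wrong choice for multi-user records.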
Operational Transforms (OT)
OT is the algorithm behind Google Docs. Instead of storing the final state, you store the sequence of operations (insert character at position 5, delete characters 10 through 14) and transform operations against each other when they conflict. OT handles concurrent text editing correctly. It is also genuinely complex to implement. Unless you are building a collaborative document editor, you do not need OT.
CRDTs (Conflict-Free Replicated Data Types)
CRDTs are data structures that can be merged deterministically without conflicts. A grow-only set, a counter, or a last-write-wins register are all simple CRDTs. More sophisticated CRDTs like Automerge or Yjs handle nested document structures and support real-time collaborative editing. CRDTs are gaining traction in local-first architecture because they guarantee eventual consistency without a central coordination server. The tradeoff is that CRDT-based systems require careful data modeling and can carry significant memory and bandwidth overhead for complex document types.
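To make the idea concrete, here is the grow-only set mentioned above, the simplest CRDT: merging is just set union, so two replicas can be combined in any order, any number of times, and still converge. This is an illustrative sketch, not a substitute for a library like Automerge or Yjs:

```typescript
// Grow-only set (G-Set): elements can be added but never removed,
// which is exactly what makes merging conflict-free.
class GSet<T> {
  private items = new Set<T>();

  add(item: T): void {
    this.items.add(item);
  }

  has(item: T): boolean {
    return this.items.has(item);
  }

  // Merge is set union: commutative, associative, and idempotent,
  // so replicas converge no matter the order or number of merges.
  merge(other: GSet<T>): void {
    for (const item of other.items) this.items.add(item);
  }

  get size(): number {
    return this.items.size;
  }
}
```

The limitation is visible in the name: you cannot delete. Supporting removal is what pushes you toward more sophisticated CRDTs (two-phase sets, OR-sets) and the memory overhead the paragraph above warns about.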
Server-authoritative sync with delta updates
For most production apps, the pragmatic choice is a server-authoritative model with delta syncing. The client sends its last sync timestamp. The server returns all records modified after that timestamp. The client applies the delta and updates its local store. This is straightforward to implement, easy to reason about, and scales well. It is what WatermelonDB's sync protocol is designed to support out of the box.
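The client side of that loop fits in one function. A sketch, where `fetchChanges` is a hypothetical stand-in for your API endpoint and an in-memory Map stands in for the local store:

```typescript
interface SyncRecord {
  id: string;
  updatedAt: number;
  deleted?: boolean; // servers signal deletes as tombstones in the delta
  data: unknown;
}

interface Delta {
  changes: SyncRecord[];
  timestamp: number; // server time of this sync, used as the next checkpoint
}

// Pull-based delta sync: ask for everything changed since the last checkpoint,
// apply it locally, and return the new checkpoint to persist.
async function pullDelta(
  store: Map<string, SyncRecord>,
  lastPulledAt: number,
  fetchChanges: (since: number) => Promise<Delta>,
): Promise<number> {
  const { changes, timestamp } = await fetchChanges(lastPulledAt);
  for (const record of changes) {
    if (record.deleted) store.delete(record.id); // tombstone: remove locally
    else store.set(record.id, record);           // create or update
  }
  return timestamp; // persist this as the new lastPulledAt
}
```

Note that the checkpoint comes from the server's clock, not the device's; trusting device clocks for sync checkpoints is a classic source of missed records.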
Conflict Resolution Without Losing Data
Conflicts happen when the same record is modified offline on two different devices, or when a client's local mutation collides with a server update that arrived while the client was offline. How you handle conflicts determines whether users trust your app with their data.
Automatic resolution strategies
For most field types, automatic resolution is feasible. Numeric counters can be merged by summing the deltas. Boolean toggles can favor the most recent change. Append-only lists (comments, activity logs) never conflict because you are only ever adding. The key is to identify which fields in your schema are amenable to automatic merging and design your data model around those patterns where possible.
For free-text fields edited by a single user, last-write-wins is acceptable because the user is unlikely to have intentionally created two divergent versions. For collaborative free-text fields, you need OT or CRDTs.
Merge strategies at the field level
Rather than treating conflict resolution as an all-or-nothing record-level decision, apply merge strategies at the field level. A record might have 10 fields. If a conflict touches different fields on each device, you can merge trivially by taking the changed value from each device. Only when the same field is changed on both sides do you need to make a judgment call. Field-level merging reduces the actual conflict rate dramatically.
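A field-level merge is easiest to express as a three-way diff against the common ancestor (the last version both devices agreed on). This sketch assumes flat records and falls back to "server wins" when both sides touched the same field; both choices are simplifications for illustration:

```typescript
type Fields = Record<string, unknown>;

// Three-way field merge: compare each side against the shared base version.
// If only one side changed a field, take that change. If both changed the
// same field, fall back to a record-level tiebreak (here: server wins).
function mergeFields(base: Fields, ours: Fields, theirs: Fields): Fields {
  const merged: Fields = { ...base };
  for (const key of Object.keys(base)) {
    const ourChange = ours[key] !== base[key];
    const theirChange = theirs[key] !== base[key];
    if (ourChange && !theirChange) {
      merged[key] = ours[key];       // only we changed it: keep our value
    } else if (theirChange) {
      merged[key] = theirs[key];     // they changed it (possibly both did): server wins
    }
  }
  return merged;
}
```

With ten fields and two devices each touching different ones, this merges cleanly with no user-visible conflict at all, which is exactly the point of the paragraph above.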
User-facing conflict UI
Sometimes you genuinely cannot merge automatically, and the right answer is to show the user what happened. Design a conflict resolution UI that shows both versions side by side and lets the user choose, or combine them manually. This surface should be rare. If your users are hitting conflict resolution screens regularly, your sync architecture has a problem. But when it happens, surfacing the conflict honestly and giving the user control is far better than silently discarding their work.
Always preserve the losing version somewhere accessible, even if just in a conflict_backup field, for at least 30 days. Users who discover a conflict after the fact will want their data back.
Background Sync and Network Detection
Knowing when the device is online is more nuanced than it sounds. A device can be connected to Wi-Fi and still have no internet access (captive portals are the classic example). A device can have 1-bar LTE that times out on every request. Your sync logic needs to handle all of these states gracefully.
React Native NetInfo
The @react-native-community/netinfo package gives you connection type (WiFi, cellular, ethernet, none), connection quality hints, and reachability information. Subscribe to connection state changes to trigger sync when the device comes back online. Do not poll. Use the event listener and react to transitions from offline to online.
One important caveat: isConnected being true does not guarantee your server is reachable. Always implement exponential backoff with jitter in your sync queue processor. Retry with delays of 1s, 2s, 4s, 8s, up to a cap of around 60 seconds. Never hammer a server when connectivity is unstable.
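The backoff schedule above is one line of arithmetic. This sketch uses the "full jitter" variant, where the actual wait is a random value up to the exponential cap, so a fleet of clients reconnecting at the same moment does not retry in lockstep:

```typescript
// Exponential backoff with full jitter: the cap doubles each attempt
// (1s, 2s, 4s, 8s, ...) up to 60s, and the actual delay is a random
// value below that cap so reconnecting clients spread out their retries.
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 60_000): number {
  const exponentialCap = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.random() * exponentialCap;
}
```

Your queue processor would call this between attempts: `await sleep(backoffDelayMs(entry.retries))` before retrying, where `sleep` is whatever timer helper your codebase already has.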
Background fetch
On iOS, BackgroundFetch (via expo-background-fetch or react-native-background-fetch) lets your app run a sync operation periodically even when the app is not in the foreground. The OS controls the actual frequency, typically every 15 minutes at minimum. Use this to pull fresh server data so the app feels up-to-date when the user opens it. Do not rely on background fetch for critical writes. Treat it as a freshness optimization, not a reliability mechanism.
Retry queues in practice
Your outbox queue processor should track retry count and last attempt timestamp per operation. Operations that fail due to network errors get retried with backoff. Operations that fail due to server validation errors (4xx responses) should be flagged for user attention rather than retried automatically, because retrying a malformed request will never succeed. Distinguish between transient failures and permanent failures in your error handling.
One more practical detail: limit the maximum age of items in your outbox queue. An operation that has been sitting unsynced for more than 7 days is probably stale. Surface it to the user rather than silently attempting to apply it to a server record that may have changed substantially in the interim. Good queue hygiene prevents confusing state divergence.
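Both rules, transient-versus-permanent and maximum queue age, reduce to a small classifier. A sketch, where `status` is the HTTP status of the failed attempt (or null for a pure network error); the names and the 7-day constant follow the discussion above but are otherwise illustrative:

```typescript
type Disposition = "retry" | "flag_for_user" | "stale";

const MAX_AGE_MS = 7 * 24 * 60 * 60 * 1000; // ~7 days of queue hygiene

// Decide what to do with a failed outbox operation.
// status: HTTP status of the failed attempt, or null for a network-level error.
function classifyFailure(status: number | null, enqueuedAt: number, now: number): Disposition {
  if (now - enqueuedAt > MAX_AGE_MS) return "stale";       // too old: surface to the user, do not apply
  if (status === null || status >= 500) return "retry";    // network or server error: transient, back off and retry
  if (status >= 400) return "flag_for_user";               // validation error: retrying will never succeed
  return "retry";
}
```

The point of encoding this as one pure function is that it is trivially unit-testable, which matters because misclassifying a 422 as retryable means hammering your API with a request that can never succeed.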
Testing Offline Scenarios
Offline behavior is notoriously undertested. Most teams test the happy path online and discover sync bugs in production. Do not be that team.
Simulating offline in development
In React Native, you can toggle airplane mode on a physical device, or use Apple's Network Link Conditioner (installed on your Mac from Xcode's Additional Tools, or enabled under Settings > Developer on a development device) to simulate 3G speeds, 100% packet loss, or complete disconnection; because it throttles the whole Mac's network, it affects the iOS Simulator too. Android emulators have similar controls under Extended Controls. Make toggling network state part of your regular development workflow, not a quarterly QA exercise.
Charles Proxy for intercepting requests
Charles Proxy lets you intercept and manipulate HTTP traffic from your device or simulator. You can throttle bandwidth to simulate slow connections, drop specific requests to simulate partial connectivity, and return custom error responses to test your error handling paths. This is invaluable for testing "the server returned a 500 mid-sync" scenarios that are hard to reproduce otherwise. Configure your React Native app to route through Charles by setting a proxy in your Wi-Fi settings.
Flipper for local data inspection
Flipper (Meta's debugging platform for React Native) has database plugins that let you inspect your SQLite database in real time during a debug session. You can view tables, run queries, and watch records change as your sync runs. Pair this with the Network inspector to correlate local state changes with the outgoing requests that caused them.
Edge cases to test explicitly
- Create a record offline, edit it offline, reconnect. Does the server receive the final state or both mutations?
- Delete a record offline that was modified on the server while offline. Does the delete win? Should it?
- Sync interrupted halfway through. Does the app resume from where it left off or restart from scratch?
- Sync fails on record 47 of 200. Are records 1 through 46 committed? Is the failure logged clearly?
- Two devices sync the same new record simultaneously. Do you get one record or two?
Write integration tests that cover each of these cases. They are tedious to set up, but they will catch the bugs that would otherwise surface as hard-to-reproduce user complaints six months after launch.
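As a taste of what these tests look like, here is the first edge case, create then edit offline, reduced to a testable unit. One common answer is to coalesce the queue before sending, folding a later update into a pending create for the same record so the server receives a single mutation with the final state. The types and the `coalesce` helper are illustrative, not a prescribed API:

```typescript
interface PendingMutation {
  op: "create" | "update" | "delete";
  recordId: string;
  payload: Record<string, unknown>;
}

// Fold a later update into a pending create for the same record, so
// "create offline, edit offline, reconnect" sends one mutation, not two.
function coalesce(queue: PendingMutation[]): PendingMutation[] {
  const out: PendingMutation[] = [];
  for (const mutation of queue) {
    const prior = out.find((p) => p.recordId === mutation.recordId);
    if (prior && prior.op === "create" && mutation.op === "update") {
      prior.payload = { ...prior.payload, ...mutation.payload }; // merge the edit into the create
    } else {
      out.push(mutation);
    }
  }
  return out;
}
```

An assertion-style test then checks exactly the question posed in the bullet list: does the server receive the final state once, or both mutations?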
When Offline-First Is Overkill
Offline-first is not the right architecture for every app, and pretending otherwise is how projects get over-engineered into missed deadlines.
Apps where it does not matter much
If your app is inherently real-time and stateless, offline support adds complexity with little benefit. A live sports scores app is useless offline. A video streaming app needs a very different (and much simpler) caching strategy than a data-entry app. A one-time onboarding flow does not need a sync engine. Before committing to offline-first, ask honestly: what would a user actually do with this app while offline? If the answer is "not much," the investment may not pay off.
The cost to consider
A properly built offline-first system adds roughly 30% to 50% to the initial development cost of your data layer. You are building a local schema, a sync protocol, a conflict resolution strategy, and a test suite for all of the above. On a 10-week MVP, that cost is real. On a 40-week product, it is a rounding error compared to the long-term retention benefit.
A phased approach
You do not have to ship full offline-first on day one. A pragmatic phased approach: start with optimistic UI and client-side caching of the last successful API response. Add a local database in phase two when you have validated which data the user actually interacts with. Build a full sync engine in phase three once you understand your conflict patterns. Each phase delivers incremental value without requiring you to solve every offline problem upfront.
The worst outcome is deferring offline support indefinitely because it feels too complex, then discovering in month 12 that your retention numbers are dragging because users on spotty connections have a broken experience. Build toward offline-first deliberately, even if you do not start there.
If you are trying to decide whether offline-first belongs in your product roadmap, book a free strategy call and we will walk through your specific use case, data model, and timeline to give you an honest recommendation.
Need help building this?
Our team has launched 50+ products for startups and ambitious brands. Let's talk about your project.