Why SQLite Is Suddenly a Production Database
For twenty years, SQLite was the database you embedded in a phone app or a desktop tool. The conventional wisdom was simple: if you had real users, you needed PostgreSQL or MySQL. SQLite was a toy. That conventional wisdom is now wrong, and it has been wrong for about three years.
Three things changed. First, NVMe storage made single-file databases absurdly fast. A modern SQLite instance can handle tens of thousands of reads per second on commodity hardware without breaking a sweat. Second, replication tooling finally got serious. Projects like libsql, LiteFS, and Cloudflare D1 brought streaming replication, read replicas, and multi-region distribution to a database that was designed to sit in one file on one disk. Third, and most importantly, the industry discovered that most applications are 95% reads. If you can put a read replica within 30 milliseconds of every user on earth, your application feels instant without any of the operational pain of a distributed SQL database.
This post is an opinionated comparison of the three most serious production SQLite options in 2026: Turso (built on libsql), LiteFS (Fly.io), and Cloudflare D1. I have shipped production workloads on all three. They are not equivalent, and picking the wrong one will hurt.
Replication Models: The Fundamental Difference
Every distributed database has to answer one question: how do writes get from the primary to the replicas? The three contenders answer it very differently, and that answer shapes everything else.
Turso (libsql) uses frame-level logical replication. libsql is a fork of SQLite that extends the write-ahead log with a streaming protocol. A primary database pushes WAL frames to replicas over the network. Replicas apply those frames and serve reads locally. Writes still flow through the primary, but you can place replicas in dozens of regions and each one sees new data within a second or two. This model is clean, understandable, and plays well with cloud primitives.
LiteFS uses FUSE-based filesystem replication. Fly built a FUSE filesystem that intercepts writes to SQLite database files, captures them as LTX transaction files, and ships them to replica nodes over a consensus-backed stream. Your application thinks it is writing to a normal file. LiteFS handles the replication transparently. This is brilliant in principle and occasionally leaky in practice. You are running a filesystem driver in production, and when it misbehaves, debugging is not fun.
Cloudflare D1 uses SQLite over Durable Objects. Each D1 database is a single SQLite instance living inside a Cloudflare Durable Object, which is a strongly consistent compute primitive pinned to one location. D1 added read replicas in 2025 via the Sessions API, but the primary is always a single Durable Object. This means D1's write model is fundamentally single-region, with Cloudflare's global network doing the work of getting reads close to users.
The practical difference: Turso gives you the most control over replica placement, LiteFS gives you the tightest integration with your application server, and D1 gives you the simplest mental model but the least flexibility.
Write Latency: Where the Differences Get Real
Writes are where distributed SQLite gets interesting, because all three products ultimately funnel writes to a single primary. The question is how far your users are from that primary and how much overhead the replication layer adds.
Turso lets you place the primary in any of its supported regions. A user in Frankfurt writing to a primary in Frankfurt sees sub-10ms write latency. A user in Sydney writing to the same primary sees 250ms or more, because the write has to cross the Pacific. Turso's embedded replica feature, which runs a local libsql database inside your application and syncs it to a remote primary, cuts read latency to zero but does nothing for writes. Write latency is a function of physical distance to the primary, full stop.
LiteFS does something clever here. It supports a feature called write forwarding: a replica node can accept a write, forward it to the primary over HTTP, wait for the primary to apply and replicate it back, and then respond to the client. This works, but it adds a round trip. Typical write latency on LiteFS with forwarding is 50 to 150ms depending on region pairs. Without forwarding, you have to route write traffic to the primary region yourself, which Fly makes easy with its replay header but which is still an application concern.
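The replay-header approach can be sketched as a small piece of middleware. This is a hypothetical sketch, not LiteFS's own proxy: it leans on LiteFS's documented behavior of writing a `.primary` file (containing the primary's hostname) on replica nodes, and on Fly's `fly-replay` response header; the directory path and header value format are assumptions you should check against the LiteFS and Fly docs.

```typescript
import { existsSync, readFileSync } from "node:fs";
import { join } from "node:path";

// Directory where LiteFS mounts its metadata — illustrative default.
const LITEFS_DIR = process.env.LITEFS_DIR ?? "/litefs";

// LiteFS writes a ".primary" file on replica nodes containing the
// primary's hostname; the file is absent on the primary itself.
function primaryHostname(dir: string = LITEFS_DIR): string | null {
  const p = join(dir, ".primary");
  if (!existsSync(p)) return null; // no file: we are the primary
  return readFileSync(p, "utf8").trim();
}

// For a write request arriving on a replica, ask Fly's proxy to replay
// the whole request on the primary instance instead of handling it here.
function replayHeaderForWrite(
  method: string,
  dir?: string,
): Record<string, string> | null {
  if (method === "GET" || method === "HEAD") return null; // reads stay local
  const primary = primaryHostname(dir);
  if (primary === null) return null; // already on the primary
  return { "fly-replay": `instance=${primary}` };
}
```

Your HTTP framework would call `replayHeaderForWrite(req.method)` early in the request cycle and, if it returns headers, respond immediately with them so Fly's proxy re-routes the request.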
Cloudflare D1 routes all writes to the Durable Object hosting the database. That Durable Object is pinned to a single Cloudflare location (usually the region where the database was first accessed). Write latency is typically 20 to 200ms depending on where your user is relative to that location. D1 lets you supply a location hint when the database is created, but you cannot move the primary afterward, which is fine for most apps but occasionally frustrating.
If write latency matters to you, Turso with a carefully chosen primary region wins. If write volume is low and you care more about developer experience, D1 is fine. LiteFS sits in the middle, with more knobs to turn than either and the most complexity.
Read Latency and Edge Proximity
This is where SQLite-based databases actually shine and where they genuinely beat Postgres for most web workloads.
Turso runs read replicas in 30-plus regions. More importantly, Turso supports embedded replicas: a full libsql database file lives on disk next to your application server, and a background thread keeps it in sync with the remote primary. Reads hit local disk. Latency is measured in microseconds. This is the killer feature. A Next.js app on Vercel reading from an embedded Turso replica is faster than the same app reading from Redis in the same region, because Redis still requires a network hop.
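As a concrete sketch of the embedded-replica pattern: the client points reads at a local file while a background sync pulls from the remote primary. The option names below mirror `@libsql/client`'s `createClient` options (`url`, `syncUrl`, `authToken`, `syncInterval`), but treat the exact shape and the hostname scheme as assumptions to verify against the libsql docs — the helper is purely illustrative.

```typescript
// Illustrative config for an embedded replica: a local SQLite file
// that a background sync keeps up to date with a remote primary.
// Field names follow @libsql/client's createClient options; verify
// against current libsql documentation before relying on them.
interface EmbeddedReplicaConfig {
  url: string;           // local replica file — reads hit this, on disk
  syncUrl: string;       // remote primary that WAL frames stream from
  authToken: string;
  syncInterval?: number; // seconds between background syncs
}

function embeddedReplicaConfig(
  dbName: string,
  org: string,
  token: string,
): EmbeddedReplicaConfig {
  return {
    url: `file:./${dbName}.db`,
    syncUrl: `libsql://${dbName}-${org}.turso.io`, // hypothetical hostname scheme
    authToken: token,
    syncInterval: 60,
  };
}

// Usage (requires @libsql/client, not imported in this sketch):
//   const client = createClient(embeddedReplicaConfig("app", "myorg", token));
//   await client.sync(); // optionally force a pull before the first read
```

All queries against `client` then read from local disk; only writes and the periodic sync touch the network.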
LiteFS gives you the same pattern with a twist. Because LiteFS is a filesystem, the SQLite database file is literally on the same machine as your application. Reads are local disk reads. On a Fly.io app with LiteFS, read latency is typically under one millisecond. The catch is that you have to run your app on Fly. LiteFS is not meaningfully portable, despite the open-source license. If you move off Fly, you are rebuilding the replication layer yourself.
Cloudflare D1 with read replicas (via the Sessions API) gets you reads served from the Cloudflare location nearest to the user. Typical read latency is 5 to 30ms globally. That is excellent for a managed service with zero configuration, but it is strictly slower than an embedded replica because every read is still a network call to a Durable Object. For read-heavy workloads where every millisecond matters, Turso and LiteFS both beat D1.
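The Sessions API pattern looks roughly like the sketch below. The interfaces are hand-written stand-ins for Cloudflare's real `D1Database` types (which come from `@cloudflare/workers-types`), and the `"first-unconstrained"` constraint string is taken from the D1 docs as of this writing — treat the exact names as assumptions.

```typescript
// Hand-rolled stand-ins for the shape of D1's Sessions API; the real
// types live in @cloudflare/workers-types. Names here are assumptions.
interface D1Rows { results: Array<Record<string, unknown>> }
interface D1Stmt { bind(...values: unknown[]): { all(): Promise<D1Rows> } }
interface D1Session {
  prepare(query: string): D1Stmt;
  getBookmark(): string | null;
}
interface D1Like {
  withSession(constraintOrBookmark?: string): D1Session;
}

// Read from the nearest replica, and return the session bookmark so a
// follow-up request can pass it back in and see its own prior writes.
async function listPosts(db: D1Like, bookmark?: string) {
  const session = db.withSession(bookmark ?? "first-unconstrained");
  const { results } = await session
    .prepare("SELECT id, title FROM posts ORDER BY id LIMIT ?")
    .bind(10)
    .all();
  return { posts: results, bookmark: session.getBookmark() };
}
```

The bookmark is the piece people miss: without threading it through (for example, in a cookie), a user who just wrote a row may read a replica that has not seen it yet.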
For a deeper look at how this compares to Postgres-based options, see our take on Neon vs PlanetScale vs Supabase.
Pricing: Per GB, Per Request, Per Row Read
Pricing is where the marketing starts to obscure the actual cost. Here is what you will actually pay in 2026.
Turso charges per row read, per row written, and per GB stored. The free tier is generous: 1 billion row reads per month, 25 million row writes, and 9 GB of storage across up to 500 databases. The Scaler plan at $29/month gives you 100 billion row reads, 100 million writes, and 24 GB. Beyond that, you pay about $1 per billion row reads and $1 per million row writes. Storage is around $0.75/GB/month. For typical web apps, Turso is shockingly cheap. A mid-sized SaaS with a million monthly active users rarely spends more than $50 to $150 per month on Turso.
LiteFS itself is free and open source. You pay Fly.io for the compute and storage to run it. A typical LiteFS setup is two or three Fly machines with attached volumes, which comes out to $20 to $80 per month at small scale and scales linearly with your fleet. There are no per-request charges, which is either a blessing or a curse depending on your traffic shape. For workloads with huge read volume, LiteFS on Fly is cheaper than anything else. For small apps, the baseline compute cost is higher than D1 or Turso free tiers.
Cloudflare D1 pricing is integrated with Workers. The free tier includes 5 GB of storage, 5 million row reads per day, and 100,000 row writes per day. The paid plan is $5/month for Workers and charges $0.001 per million rows read, $1 per million rows written, and $0.75/GB/month for storage. D1 is the cheapest for small apps because the Workers free tier is so generous. Watch read-heavy workloads, though: D1 meters rows scanned by each query, not rows returned, so an unindexed full-table scan bills every row it touches and the counts add up faster than you expect.
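To make the comparison concrete, here is a back-of-envelope estimator using the allowances and overage rates quoted above. It assumes D1's metered read rate is $0.001 per million rows read (the rate in Cloudflare's pricing docs at the time of writing), ignores daily-versus-monthly allowance differences and any tier changes, and is an approximation, not a quote.

```typescript
// Rough monthly cost on Turso's Scaler plan: $29 base including
// 100B row reads, 100M row writes, and 24 GB, then roughly $1 per
// extra billion reads, $1 per extra million writes, $0.75/GB.
function tursoScalerEstimate(rowsRead: number, rowsWritten: number, gbStored: number): number {
  const base = 29;
  const reads = (Math.max(0, rowsRead - 100e9) / 1e9) * 1;
  const writes = (Math.max(0, rowsWritten - 100e6) / 1e6) * 1;
  const storage = Math.max(0, gbStored - 24) * 0.75;
  return base + reads + writes + storage;
}

// Rough monthly cost on D1 with the $5 Workers paid plan, metered at
// $0.001 per million rows read, $1 per million rows written,
// $0.75/GB/month (included allowances ignored for simplicity).
function d1Estimate(rowsRead: number, rowsWritten: number, gbStored: number): number {
  const base = 5;
  const reads = (rowsRead / 1e6) * 0.001;
  const writes = (rowsWritten / 1e6) * 1;
  const storage = gbStored * 0.75;
  return base + reads + writes + storage;
}
```

For example, a workload doing 10 billion row reads, 10 million row writes, and storing 5 GB lands around $29/month on Turso (within the Scaler allowances) and around $29/month on D1 as well; the curves only diverge at much larger read or write volumes.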
Rough rule of thumb: D1 for prototypes and small apps, Turso for most production workloads, LiteFS for large read-heavy applications already on Fly.
libsql, Drizzle, and Prisma Compatibility
A database is only as good as the tools that let you work with it. In 2026, the SQLite ecosystem has mostly caught up to Postgres on ORM and migration tooling, but there are some wrinkles.
libsql is the fork of SQLite that powers Turso. It is almost fully compatible with SQLite, with additions like WAL streaming, native HTTP protocol, and extensions for vector search. Most applications written against SQLite will run on libsql without changes. The one gotcha: some obscure SQLite extensions do not ship with libsql, so check before you commit.
Drizzle ORM is the gold standard for typed SQL in TypeScript in 2026, and its SQLite support is first-class. It has dedicated drivers for libsql (Turso), better-sqlite3 (LiteFS), and D1. Migrations work the same across all three. If you are starting a new TypeScript project on any of these databases, Drizzle should be your default. The developer experience is excellent and it does not hide SQL from you the way Prisma does.
Prisma supports all three, but with caveats. Prisma's SQLite driver works with Turso via the libsql adapter, with D1 via a dedicated adapter, and with plain SQLite files over LiteFS. The catch is that Prisma's migration tooling assumes a single writable database and gets confused by replicated setups. You generally run migrations against the primary manually and let the replicas catch up. Prisma also has historically lagged on edge runtime support, though this has improved.
Raw libsql or better-sqlite3 clients work fine if you do not want an ORM. For complex queries, Kysely is a nice middle ground: query builder, full type safety, no schema lock-in. It works with all three databases.
If you need vector search (for RAG applications or semantic search), Turso has native libsql vector support. D1 does not have first-class vector support and you will typically pair it with Cloudflare Vectorize. LiteFS inherits whatever extensions you ship with your SQLite binary.
When NOT to Use SQLite in Production
I have been optimistic about SQLite in production, but there are absolutely workloads where it is the wrong tool. Recognizing these cases will save you a painful migration.
Multi-writer OLTP. This is the big one. SQLite allows exactly one writer at a time per database file. Turso, LiteFS, and D1 all inherit this limitation because they are all ultimately routing writes to a single primary. If your workload is write-heavy and latency-sensitive across multiple regions (think: a global payments system, a multi-region marketplace with simultaneous bidding, a collaborative editing platform with thousands of concurrent writers), SQLite is not the right choice. Use Postgres with Citus, use CockroachDB, use Spanner, use anything with true multi-master replication.
Complex analytical queries over huge datasets. SQLite's query planner is fine for OLTP but it is not built for analytics. If you regularly run aggregations over hundreds of millions of rows with joins across many tables, you want ClickHouse, DuckDB, BigQuery, or Snowflake. Not SQLite.
Applications that need true horizontal write scaling. If you are confident you will need to write hundreds of thousands of rows per second, do not start with SQLite. The ceiling is real. A single libsql or D1 primary tops out in the low tens of thousands of writes per second under ideal conditions, and much lower with complex transactions.
Very large single databases. SQLite can technically handle terabytes, but the operational experience is not great. Backups, migrations, and index rebuilds get painful past a few hundred GB. If you know you are heading toward a multi-TB single database, pick a tool designed for it.
Applications requiring row-level security and complex auth. Postgres has RLS. SQLite does not. You have to enforce row-level access in your application layer, which is fine but adds work.
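Enforcing that in the application layer usually means funneling every query through a helper that refuses to touch tenant-owned tables without an owner predicate. A minimal, hypothetical sketch — the table names, the `owner_id` column, and the helper itself are all illustrative, not a library API:

```typescript
// App-layer stand-in for RLS: every tenant-owned table must be queried
// through scopedSelect, which appends the owner predicate itself.
const TENANT_TABLES = new Set(["posts", "comments", "invoices"]);

function scopedSelect(
  table: string,
  ownerId: string,
  extraWhere = "1=1",
): { sql: string; params: string[] } {
  if (!TENANT_TABLES.has(table)) {
    throw new Error(`unknown tenant table: ${table}`);
  }
  // ownerId is always bound as a parameter, never string-interpolated.
  return {
    sql: `SELECT * FROM ${table} WHERE owner_id = ? AND (${extraWhere})`,
    params: [ownerId],
  };
}
```

The discipline, not the helper, is the point: if raw SQL can reach the database from anywhere in the codebase, the scoping guarantee evaporates.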
For a broader framework on when to change databases, see how to scale a database.
How to Actually Choose
Here is the short version, in priority order.
- Pick Turso if you want the best all-around production SQLite experience in 2026. The embedded replica feature is genuinely transformative. Pricing is reasonable. Tooling is good. The team ships fast. This is my default recommendation for a new web app.
- Pick LiteFS if your application is already on Fly.io and you want filesystem-level local reads. The operational cost is low, the read performance is incredible, and the integration with Fly is tight. Do not pick LiteFS if you are not committed to Fly as your platform; the portability story is worse than it looks.
- Pick Cloudflare D1 if you are building on Cloudflare Workers and you want everything in one ecosystem. The integration with Workers, KV, R2, Durable Objects, and Vectorize is seamless. D1 is the simplest option to get started and the free tier is hard to beat. Just understand that you are committing to the Cloudflare platform end to end.
- Pick Postgres (Neon, Supabase, PlanetScale on Postgres) if you need multi-writer semantics, complex analytical queries, row-level security, or very large single databases. SQLite is not always the answer, and Postgres is not going anywhere.
One more practical tip: whichever option you pick, write your data access layer against a thin abstraction from day one. Drizzle or Kysely. Do not couple your application code to libsql-specific or D1-specific APIs if you can avoid it. The migration cost between these three products is low if you are disciplined and painful if you are not. For more on how managed database choice fits into the larger stack picture, see our Convex vs Supabase vs Appwrite comparison.
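What "a thin abstraction" means in practice: define the store interface your application talks to, and keep all driver-specific code behind it. A hypothetical sketch, with an in-memory implementation standing in for the libsql- or D1-backed one (all names here are illustrative):

```typescript
// The app depends only on this interface; swapping Turso for D1 means
// writing one new class, not touching every call site.
interface UserStore {
  insert(id: string, email: string): Promise<void>;
  byEmail(email: string): Promise<{ id: string; email: string } | null>;
}

// In-memory implementation — the same shape a LibsqlUserStore or
// D1UserStore (hypothetical names) would have, and handy in tests.
class MemoryUserStore implements UserStore {
  private rows = new Map<string, { id: string; email: string }>();

  async insert(id: string, email: string): Promise<void> {
    this.rows.set(id, { id, email });
  }

  async byEmail(email: string): Promise<{ id: string; email: string } | null> {
    for (const row of this.rows.values()) {
      if (row.email === email) return row;
    }
    return null;
  }
}
```

Methods return promises even where the in-memory version is synchronous, so the real network-backed implementations slot in without changing a single caller.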
SQLite in production is not a gimmick. It is a serious architecture choice that happens to be cheaper, faster, and simpler than Postgres for the majority of web applications being built in 2026. The three tools above each represent a different bet on what production SQLite should look like. Pick the one whose bet matches yours and ship.
We help teams pick, migrate, and scale databases for production workloads every week. If you are weighing SQLite against Postgres, or trying to decide between Turso, LiteFS, and D1 for a specific project, we can help you cut through the marketing and get to a decision fast. Book a free strategy call and we will talk through your workload.
Need help building this?
Our team has launched 50+ products for startups and ambitious brands. Let's talk about your project.