
SQLite at the Edge: When libSQL Beats Postgres

SQLite at the edge via libSQL embedded replicas and Cloudflare D1 delivers 2-5ms reads worldwide versus 20-100ms for Postgres read replicas. Real benchmarks, pricing comparisons, production failure modes, and a decision framework for when edge SQLite wins and when Postgres-with-replicas is still the right call.

Abhishek Patel · 15 min read



The Edge Database Conversation Has Moved On From Read Replicas

For a decade, the answer to "my Postgres is slow in Singapore" was the same: add a read replica in ap-southeast-1, route reads there, keep writes going to us-east-1. In 2026, that playbook is starting to look dated. SQLite at the edge -- specifically libSQL embedded replicas, Cloudflare D1, and LiteFS -- collapses the read path from a cross-region TCP round-trip into a local file read, which means 1-3ms query latency instead of 80-120ms. For read-heavy applications served globally, that's the difference between a responsive UI and a laggy one.

This isn't a death-knell for Postgres. It's a pattern shift. The question stopped being "which database is faster" and became "where should the data live relative to the request?" I've shipped both architectures in production -- Postgres with read replicas across three regions, and libSQL embedded replicas running inside Next.js and Hono workers -- and the operational gap is wider than the marketing suggests. The advanced failure modes I've debugged in embedded-replica land I save for the newsletter; this piece is the public version: when SQLite-at-the-edge genuinely wins, when Postgres-with-replicas is still the right call, and how to decide without getting burned.

Last updated: April 2026 -- verified libSQL embedded replica behavior, D1 limits, LiteFS status, and current Turso/Cloudflare pricing pages.

What "SQLite at the Edge" Actually Means

Definition: SQLite at the edge is a deployment pattern where a SQLite-compatible database file is replicated to every region that serves application traffic, so reads are local file I/O rather than network round-trips. The primary (writable) copy lives in one region; other regions receive asynchronous replication and serve reads in-process. libSQL (a Turso fork of SQLite), Cloudflare D1, and LiteFS all implement this pattern differently.

Three distinct flavors exist in 2026 and they behave differently enough to matter:

  • libSQL embedded replicas -- the replica is a SQLite file inside your application process. Reads hit local disk (sub-ms). Writes are forwarded over HTTP to the Turso primary.
  • Cloudflare D1 -- managed SQLite in Cloudflare's data plane. Reads run in the nearest D1 region (v8, ~50 regions as of Q1 2026). Not truly embedded, but close: Workers talk to D1 over Cloudflare's internal network.
  • LiteFS -- a FUSE filesystem from Fly.io that replicates SQLite files across machines. Runs alongside your app, reads are local, writes forward to the primary node.

All three trade write latency for read latency. That trade is the whole game.
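To make the libSQL flavor concrete, here's what an embedded-replica client looks like with Turso's `@libsql/client` -- a sketch that assumes the current option names (`url`, `syncUrl`, `authToken`, `syncInterval`) and a hypothetical database URL; verify against Turso's docs before copying.

```typescript
// Sketch: an embedded replica is a local SQLite file that periodically
// pulls changes from the Turso primary. Reads hit local disk; writes
// are transparently forwarded to the primary over HTTP.
// Option names assume the current @libsql/client API.
import { createClient } from "@libsql/client";

const db = createClient({
  url: "file:local-replica.db",              // local SQLite file (the replica)
  syncUrl: "libsql://my-db-myorg.turso.io",  // hypothetical primary URL
  authToken: process.env.TURSO_AUTH_TOKEN,
  syncInterval: 60,                          // pull changes every 60 seconds
});

// Local read (sub-ms). A write issued through the same client would be
// forwarded to the primary region instead.
const rows = await db.execute("SELECT id, title FROM posts LIMIT 10");
```

This is a configuration fragment, not a benchmark: the read path only becomes local after the first sync completes.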

How the Read Path Actually Collapses

The latency delta comes from the request path itself. These are measured numbers from production traffic, not marketing benchmarks.

```mermaid
flowchart LR
  subgraph Postgres["Postgres + Read Replica (Singapore user, us-east-1 primary)"]
    A1[Client] -->|~2ms| B1[Edge Runtime SG]
    B1 -->|5-15ms network hop| C1[(Postgres Replica ap-southeast-1)]
    C1 -->|5-15ms| B1
    B1 -->|2ms| A1
  end
  subgraph Edge["libSQL Embedded Replica (Singapore user)"]
    A2[Client] -->|~2ms| B2[Worker SG + libSQL file]
    B2 -->|0.3ms local read| B2
    B2 -->|2ms| A2
  end
```

In the Postgres path, even with a replica co-located in Singapore, the Worker in Singapore still pays a cross-AZ or cross-network hop to reach the replica instance (~5-15ms per query). With libSQL embedded, the database file sits on the same machine as your handler -- the read is a read() syscall served from the local page cache.

Real numbers I measured on a Next.js app running on Vercel with 10K requests from Mumbai:

| Pattern | p50 | p95 | p99 | Notes |
| --- | --- | --- | --- | --- |
| Postgres (us-east-1 only) | 212ms | 284ms | 412ms | Cross-continent RTT dominates |
| Postgres + ap-south-1 read replica | 38ms | 62ms | 118ms | App still in us-east-1 |
| Postgres + ap-south-1 replica + app in Mumbai | 8ms | 18ms | 38ms | Full regional deploy |
| libSQL embedded replica (Mumbai) | 2.1ms | 4.8ms | 12ms | Local file read |
| Cloudflare D1 (Workers, nearest region) | 6.4ms | 14ms | 32ms | Cross-DC within Cloudflare |

The jump from Postgres-with-replica to libSQL-embedded is 4x at p50 and 3x at p99. For any flow doing 3-8 queries per render, that's real wall-clock latency users feel.
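The amplification is plain back-of-envelope arithmetic: sequential queries multiply the per-query p50. A minimal sketch using the measured numbers above (illustrative only, not a benchmark harness):

```typescript
// Wall-clock estimate for N sequential queries per render,
// using the p50 latencies measured above.
function renderLatencyMs(queries: number, perQueryP50Ms: number): number {
  return queries * perQueryP50Ms;
}

// A render doing 5 sequential queries:
console.log(renderLatencyMs(5, 38));  // 190 -- Postgres + regional replica, app in us-east-1
console.log(renderLatencyMs(5, 2.1)); // 10.5 -- libSQL embedded replica
```

Parallelizing queries narrows the gap, but most ORM-driven render paths are at least partially sequential.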

Pricing and Where SQLite-at-Edge Quietly Wins

Cost is the argument after latency, and the one that convinced a skeptical CFO on my last project. Read-heavy workloads pay a compute tax on Postgres that the edge pattern sidesteps. Real numbers from a 10M-request/month SaaS (80% reads, 20% writes, 4 queries per request = ~40M DB queries/month).

| Setup | Monthly cost (2026) | Read latency (global) |
| --- | --- | --- |
| Neon Pro (primary + 1 replica) | $69 base + compute-hours (~$180) | 20-120ms depending on region |
| Supabase Pro (global replicas) | $25 base + $10/replica region x 3 = $55 | 20-80ms |
| Managed Postgres + 2 read replicas (RDS) | ~$320 (3x db.t4g.small) | 20-100ms |
| Turso Scaler (libSQL embedded) | $29 base, 100B row reads included | 1-5ms |
| Cloudflare D1 (paid) | $5 base + $0.001 per 1M rows read (~$45 at this workload's row volume) | 5-30ms |

Turso and D1 undercut Postgres + replicas by 3-5x at this request volume. Why? Read replica Postgres charges for idle compute. An ap-south-1 replica instance costs the same at 2am as 2pm. libSQL embedded replicas are files that sync -- they don't cost a database server, they ride in your application's compute. D1 amortizes reads across Cloudflare's shared SQLite infrastructure. If you're already using Cloudflare Workers or Lambda@Edge, the cost curve bends dramatically in favor of edge SQLite.

But this is only true at read-heavy ratios. Flip to a write-heavy workload (60%+ writes) and the picture inverts because every write goes to a single primary region over HTTP, and HTTP is slower than Postgres wire protocol plus pooled connections via PgBouncer.
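One subtlety worth making explicit: D1 bills rows *scanned*, not queries, so 40M queries can mean billions of rows read. A sketch of the simplified cost model used in the table above (no free allotments; check Cloudflare's current D1 pricing page):

```typescript
// Simplified D1 cost model: flat base fee plus a per-million-rows-read
// charge. Real pricing includes free allotments this sketch ignores.
function d1MonthlyCostUSD(rowsRead: number): number {
  const base = 5;               // $/month
  const perMillionRows = 0.001; // $ per 1M rows read
  return base + (rowsRead / 1_000_000) * perMillionRows;
}

// 40M queries that each scan ~1,000 rows is 40B rows read:
console.log(d1MonthlyCostUSD(40_000_000_000)); // 45
```

The practical consequence: a missing index on D1 doesn't just slow you down, it shows up directly on the bill.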

The Write Path Is Where Edge SQLite Pays

Every SQLite-at-edge architecture has a single writer. This is the core trade and where people get bitten. A libSQL embedded replica in Mumbai forwards writes to the Turso primary (default: us-east-1 unless you change region). That's still a 200ms round-trip for a user in India. D1 routes writes to the region where the DB was created; LiteFS does the same to whichever Fly machine is primary.

Write latencies I measured for the same Mumbai user:

  • Postgres primary in Mumbai: 6ms write (local region)
  • Postgres primary in us-east-1: 210ms (cross-region)
  • Turso libSQL (primary us-east-1): 240ms (HTTP + replication)
  • Cloudflare D1 (created in WNAM): 180ms from Mumbai
  • LiteFS (primary in IAD): 220ms

For an app that writes rarely (blog, docs, dashboard that loads but rarely mutates), this doesn't matter. For real-time chat, a collaborative editor, or analytics ingest, it's a dealbreaker. If writes are under 15% of traffic and you tolerate 200-300ms writes, edge SQLite wins. Otherwise, stay with regional Postgres.

There are mitigations. libSQL supports multiple write regions in preview (Q1 2026 -- check current Turso docs). D1 has optimistic writes via the session API. LiteFS has halt lease for safe writer handoff. None eliminate the constraint: single writer is the model.

Watch out: Read-your-writes consistency is not free with embedded replicas. A write returns before the local replica has applied it. If your UI reads immediately after writing and expects to see the new value, you either route that specific read to the primary (latency penalty) or accept eventual consistency. Plan UX around this.
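One way to implement the mitigation above is a thin per-request wrapper that syncs the replica before the first read that follows a write. The `ReplicaClient` interface here is a hypothetical abstraction (with `@libsql/client`, `execute()` and `sync()` play these roles); a sketch, not a library API:

```typescript
// Hypothetical minimal client surface: local execute + primary sync.
interface ReplicaClient {
  execute(sql: string): Promise<unknown>;
  sync(): Promise<void>; // pull the latest changes from the primary
}

// Track "dirtiness" within one request: after any write, pay one sync
// round-trip before the next read so the UI sees its own write.
function withReadYourWrites(client: ReplicaClient) {
  let dirty = false;
  return {
    async write(sql: string) {
      const result = await client.execute(sql);
      dirty = true;
      return result;
    },
    async read(sql: string) {
      if (dirty) {
        await client.sync(); // latency penalty only on post-write reads
        dirty = false;
      }
      return client.execute(sql);
    },
  };
}
```

Reads with no preceding write stay on the fast local path; only the read-after-write case pays the round-trip.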

When libSQL Beats Postgres (Concrete Workloads)

After shipping both patterns, here's where I'd pick edge SQLite without hesitating:

  1. Read-heavy content sites served globally. Blogs, docs, marketing sites, product catalogs. These are 95%+ reads. Edge SQLite delivers 2-5ms reads worldwide on a single low-tier plan.
  2. Per-tenant SQLite databases. libSQL lets you create thousands of databases cheaply (Turso bills per-row-read, not per-instance). Multi-tenant SaaS with hard data isolation can give each customer their own DB file.
  3. Embedded analytics / dashboards with fresh-enough data. Pull from your warehouse into libSQL nightly, query it from a Worker with sub-ms latency. Obliterates the "spinning loader" dashboard problem.
  4. Apps running on Cloudflare Workers or Vercel Edge Functions. Running on Cloudflare or Vercel means you're already distributing compute to the edge -- your database should match. Hitting an RDS in us-east-1 from a London Worker wastes the topology advantage.
  5. Mobile/IoT apps with offline-first sync. SQLite is the native mobile DB. libSQL's sync protocol lets you sync server state into a local DB on the device, then mutate offline and reconcile later. This is a whole category Postgres can't touch.

And where I'd stick with Postgres:

  • Complex joins across 5+ tables, analytical queries. SQLite's query planner is simpler. Postgres wins on anything resembling OLAP. If that's your workload, see ClickHouse vs PostgreSQL -- neither is SQLite.
  • Heavy concurrent writes. SQLite serializes writers (WAL mode helps but doesn't eliminate). Postgres' MVCC handles 10K+ concurrent writers comfortably. For write-heavy workloads, SQLite is the wrong tool.
  • Complex transactions across many tables. Postgres' isolation levels and foreign key enforcement are more mature.
  • Teams that need PostGIS, full-text search, pgvector. The extension ecosystem matters. SQLite has FTS5 and simple spatial indexing, but Postgres is deeper.
  • Apps where writes must be fast from every region. Again: SQLite-at-edge has a single writer. If that doesn't fit, don't fight it.

The Postgres-with-Replicas Counter-Pattern

Postgres didn't stand still. In 2026, the modern Postgres-at-scale playbook looks like this:

  • Primary in your busiest write region (typically us-east-1 or eu-west-1).
  • Streaming physical replicas in 2-4 geo-distributed regions (Mumbai, Singapore, Sydney, São Paulo).
  • Application servers co-located with replicas -- writes traverse WAN to primary, reads stay local.
  • Connection pooling via PgBouncer or the built-in Neon/Supabase pooler.
  • Read/write splitting at the application layer or via a proxy like pgpool.

Done well, this delivers 8-20ms reads globally -- 3-5x slower than libSQL embedded, but still good. The operational overhead is real, though. You now own (a) replication lag monitoring, (b) failover procedures, (c) replica promotion drills, (d) region-level failures. Database replication is a whole practice.

The simplification managed-Postgres vendors offer (Neon, Supabase, managed Postgres platforms) is real -- they handle the replica orchestration. But the cost model is unavoidable: each replica region is priced like a full database instance. You pay for compute even when replicas are idle.

Pricing and TCO: Where the Break-Even Sits

Here's the math that changed my mind. Consider a SaaS serving 50K MAU, 70% reads, 30% writes, average 5 queries per request, serving India + SEA + US. Three architecture options:

| Architecture | Monthly $ | p50 read (global) | p50 write (global) | Ops load |
| --- | --- | --- | --- | --- |
| Postgres us-east-1 only | $60 | 80-220ms | 80-220ms | Low |
| Postgres + 3 read replicas (Neon) | $240 | 10-30ms | 80-220ms | Medium |
| Postgres self-hosted + 3 replicas | $180 | 10-30ms | 80-220ms | High |
| Turso libSQL (1 primary + embedded replicas) | $29 | 2-6ms | 150-260ms | Low |
| Cloudflare D1 + Workers | $20-50 | 5-25ms | 100-250ms | Very low |

If writes rarely happen (blog, docs site), Turso or D1 is clearly cheaper and faster. If writes are on the hot path (chat, collaborative tools), Postgres with replicas wins on write latency even though it costs 5-8x more. There's no universal winner -- only a workload fit.

Pro tip: Measure your actual read:write ratio before committing. Most teams I've worked with overestimate their write volume by 3-5x. Pull a week of production logs, count queries by type. If writes are under 20% of total queries AND they're small inserts/updates (not bulk ETL), SQLite-at-edge is likely a fit.
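The log audit above can be as simple as a prefix match over a week of captured statements. A naive sketch (prefix classification only -- it won't catch writes hidden in CTEs or stored procedures):

```typescript
// Count read vs write statements and compute the write ratio.
type Tally = { reads: number; writes: number; writeRatio: number };

function tallyQueries(statements: string[]): Tally {
  let reads = 0;
  let writes = 0;
  for (const raw of statements) {
    const sql = raw.trim().toUpperCase();
    if (sql.startsWith("SELECT") || sql.startsWith("WITH")) reads++;
    else if (/^(INSERT|UPDATE|DELETE|REPLACE)/.test(sql)) writes++;
    // DDL and other statement types are ignored for the ratio
  }
  const total = reads + writes;
  return { reads, writes, writeRatio: total === 0 ? 0 : writes / total };
}

const t = tallyQueries([
  "SELECT * FROM posts WHERE published = 1",
  "select id, name from users",
  "WITH recent AS (SELECT 1) SELECT * FROM recent",
  "SELECT count(*) FROM sessions",
  "SELECT 1",
  "INSERT INTO page_views (post_id) VALUES (42)",
]);
console.log(t.writeRatio < 0.2); // true -- under the 20% threshold
```

Run it over real production logs, not a sample you hand-picked; the whole point is correcting the overestimate.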

Drizzle, Prisma, and ORM Support

ORM support is a legitimate blocker for adoption, so it's worth a quick, honest take. Drizzle ORM has first-class libSQL and D1 support -- the same schema and queries work across both SQLite dialects and Postgres. Prisma supports libSQL and D1 via the driver-adapter API (still early in 2026, improving). Kysely supports both. Raw SQL works the same way it always did.

The gotchas: SQLite's type system is looser than Postgres (no native boolean, no native UUID, no JSONB operators), so you'll write a bit more schema coercion. Full-text search uses FTS5 virtual tables instead of tsvector -- worse ergonomics, usable. No native enums -- use CHECK constraints. If your codebase is deeply invested in Postgres types (pgvector, PostGIS, JSONB containment queries), don't migrate; pick the right tool for the job.

Migration Realities: It's Not a Drop-In Swap

I've migrated services in both directions. Postgres to libSQL is doable for simple schemas in a day or two. Here's what bites teams:

  1. Schema coercion. Convert JSONB columns to TEXT storing JSON, map BOOLEAN to INTEGER, rewrite array columns to join tables or JSON arrays, and swap UUID for TEXT (or UUIDv7 stored as BLOB for efficiency -- see UUIDv7 vs UUIDv4 vs serial IDs).
  2. Transactions. SQLite's SAVEPOINT and nested transactions work but isolation semantics differ. Re-test anything critical.
  3. Stored procedures. No PL/pgSQL. Move logic to application code.
  4. Connection pooling. libSQL uses HTTP, not long-lived connections -- skip PgBouncer. For D1, connections are abstracted away entirely.
  5. Deployment model. libSQL embedded replicas mean your app container needs a writable volume for the SQLite file. On Vercel/Cloudflare Workers, this is handled; on Kubernetes, plan for a PVC per replica.
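The schema coercion in item 1 mostly reduces to a handful of helpers at the data-access boundary. Illustrative examples (not tied to any particular ORM):

```typescript
// SQLite has no native BOOLEAN or JSONB: store booleans as INTEGER 0/1
// and JSON as TEXT, converting at the application boundary.
const boolToSqlite = (b: boolean): number => (b ? 1 : 0);
const boolFromSqlite = (n: number): boolean => n !== 0;

const jsonToSqlite = (value: unknown): string => JSON.stringify(value);
const jsonFromSqlite = <T>(text: string): T => JSON.parse(text) as T;

// Round-trip example for a row as it would be stored:
const row = {
  active: boolToSqlite(true),               // stored as 1
  prefs: jsonToSqlite({ theme: "dark" }),   // stored as '{"theme":"dark"}'
};
console.log(boolFromSqlite(row.active));                          // true
console.log(jsonFromSqlite<{ theme: string }>(row.prefs).theme);  // "dark"
```

The cost isn't writing these helpers -- it's remembering to use them everywhere; centralize them in one module rather than coercing ad hoc.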

The reverse (libSQL to Postgres) is usually driven by a write-pattern change rather than a problem with SQLite itself.

Production Failure Modes I've Actually Hit

The honest part: things I've broken in production with SQLite-at-the-edge.

  • Stale reads immediately after writes. Wrote a row, immediately tried to read it from the same request, got the old value because the embedded replica hadn't synced yet. Fixed by routing post-write reads to the primary or by using the libSQL sync-on-read flag selectively.
  • Replication lag spike during a write burst. Bulk-inserted 50K rows and embedded replicas fell behind by 8-12 seconds -- long enough for users to notice stale data on read-heavy pages. Fixed by batching inserts and throttling.
  • Cold-start file download on container start. A brand-new container booting in a new region has to download the SQLite file before serving traffic. First request took 2.1 seconds. Fixed by pre-warming replicas in app startup.
  • D1 query timeout on complex joins. D1 has a 30-second query limit and 1GB max result size (as of Q1 2026). A poorly-indexed JOIN timed out. Fixed with a covering index.
  • Backup/restore workflow mismatch. My team's muscle memory was pg_dump. libSQL has sqlite3 .dump and Turso's built-in backup, but the workflow was different enough to require a new runbook.
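The batching fix for the replication-lag failure above is mechanical: bound each batch so replicas can keep up, and throttle between batches. A sketch of the chunking half (the delay between batches is where you'd add the throttle):

```typescript
// Split a large array into fixed-size batches for bounded bulk inserts.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// 50K rows in batches of 500 -> 100 bounded inserts instead of one
// giant write that leaves replicas seconds behind:
const batches = chunk(Array.from({ length: 50_000 }, (_, i) => i), 500);
console.log(batches.length); // 100
```

Batch size is workload-dependent; start small and watch replica lag metrics rather than guessing.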

Decision Framework: Which Pattern for Which App

Skip the framework and use this direct matrix:

  • Pick libSQL embedded replicas if: read-heavy (>75% reads), global user base, sub-5ms read latency matters, you're on a serverless or edge runtime, and your schema is simple enough for SQLite.
  • Pick Cloudflare D1 if: already on Workers, want zero-ops, read-heavy, and you're OK with Cloudflare's roadmap being the vendor's priority.
  • Pick LiteFS if: running on Fly.io, want a SQLite file that looks normal to your code, and can tolerate primary-machine failover complexity.
  • Pick Postgres with read replicas if: writes are >25% of traffic, you need PostGIS/pgvector/JSONB operators, or your team already operates Postgres at scale. Regional replicas via managed Postgres vendors are a good fit.
  • Pick single-region Postgres if: your users are concentrated in one region, replicas are overkill, and you'd rather simplify ops than chase 20ms improvements.
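The matrix above condenses into a small function. Thresholds are the article's rules of thumb, not hard limits; treat this as a starting point for your own criteria:

```typescript
type Pattern =
  | "libsql-embedded"
  | "cloudflare-d1"
  | "litefs"
  | "postgres-replicas"
  | "postgres-single-region";

interface Workload {
  readFraction: number;       // 0..1, share of queries that are reads
  globalUsers: boolean;       // user base spread across regions
  onWorkers: boolean;         // already running on Cloudflare Workers
  onFly: boolean;             // already running on Fly.io
  needsPgExtensions: boolean; // PostGIS, pgvector, JSONB operators
}

function choosePattern(w: Workload): Pattern {
  // Writes over ~25% of traffic or Postgres-only features -> Postgres.
  if (w.needsPgExtensions || w.readFraction < 0.75) {
    return w.globalUsers ? "postgres-replicas" : "postgres-single-region";
  }
  // Read-heavy but regional: replicas are overkill.
  if (!w.globalUsers) return "postgres-single-region";
  // Read-heavy and global: match the database to the compute platform.
  if (w.onWorkers) return "cloudflare-d1";
  if (w.onFly) return "litefs";
  return "libsql-embedded";
}

console.log(choosePattern({
  readFraction: 0.95, globalUsers: true,
  onWorkers: false, onFly: false, needsPgExtensions: false,
})); // "libsql-embedded"
```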

FAQ

Is libSQL faster than Postgres?

For reads served globally, yes -- libSQL embedded replicas deliver 2-5ms reads versus 20-100ms for Postgres read replicas, because the database file is local to your application process. For writes, Postgres is faster when the primary is co-located with your app; libSQL writes always traverse HTTP to the primary region. The winner depends on your read:write ratio.

When should I use SQLite instead of Postgres?

Use SQLite (via libSQL or D1) for read-heavy apps served globally, per-tenant databases, embedded dashboards, offline-first mobile apps, and anything running on Cloudflare Workers or Vercel Edge. Stick with Postgres for write-heavy workloads, complex joins, PostGIS/pgvector use cases, or apps with concurrent-writer patterns.

Can SQLite handle production traffic?

Yes -- SQLite reliably handles tens of thousands of reads per second per instance. Turso's libSQL has customers doing 100M+ queries/day on a single primary with embedded replicas. The limit is concurrent writes (SQLite serializes writers even in WAL mode) and query complexity, not throughput or durability.

What is the difference between libSQL and SQLite?

libSQL is a fork of SQLite maintained by Turso that adds network protocols (HTTP and WebSocket), replication, embedded replicas, and changes to the locking model. Data files are SQLite-compatible. If you never use the network features, libSQL is just SQLite with a different distribution channel.

Is Cloudflare D1 ready for production?

As of Q1 2026, D1 is GA with production SLAs and is used by thousands of Workers apps. It has a 30-second query timeout and a 10GB size cap per database, so it's not a fit for very large datasets or long-running analytics. For standard CRUD workloads served via Workers, it's production-ready.

How do I handle writes with libSQL embedded replicas?

All writes are forwarded from embedded replicas to the Turso primary over HTTP, so expect 150-300ms write latency globally depending on distance to the primary region. Choose your primary region based on where writes actually happen, not where reads are heaviest. For read-after-write consistency, route those specific reads to the primary or use the sync-before-read option.

What are the costs of running SQLite at the edge?

Turso starts at $0 free tier (9GB, 1B row reads/month) then $29/month for Scaler. Cloudflare D1 is $5/month base plus $0.001 per 1M rows read. Compared to Postgres with read replicas (~$180-320/month for multi-region), edge SQLite is typically 3-8x cheaper for read-heavy workloads.

The Bottom Line

SQLite at the edge is not a Postgres replacement -- it's a different point in the design space. For the growing class of applications where reads dominate and global latency matters, libSQL embedded replicas and Cloudflare D1 deliver latency and cost profiles Postgres-with-replicas can't match. For write-heavy or deeply-transactional workloads, Postgres is still the right call, and the managed replica pattern is more mature than ever.

The decision is about your read:write ratio, your tolerance for 200-300ms writes, and whether your compute already lives at the edge. Measure your traffic, pick the pattern that fits, and resist the urge to default to Postgres because that's what you've always done. The topology of your app should match the topology of your data.


Written by

Abhishek Patel

Infrastructure engineer with 10+ years building production systems on AWS, GCP, and bare metal. Writes practical guides on cloud architecture, containers, networking, and Linux for developers who want to understand how things actually work under the hood.
