PostgreSQL 18 Features Developers Actually Care About
PostgreSQL 18 ships async I/O (up to 3x faster sequential scans), native UUIDv7, virtual generated columns, OAuth 2.0, and OLD/NEW in RETURNING. Real benchmarks, upgrade traps, and extension compatibility for pgvector, TimescaleDB, and Citus.

The First PostgreSQL Release That Rewires the Storage Layer
PostgreSQL 18 shipped on 25 September 2025, with 18.3 following in February 2026. Most major versions ship one headline feature and a grab bag of syntax sugar. This one rewrote the storage read path. The new asynchronous I/O subsystem delivers up to 3x faster sequential scans on io_uring-capable kernels, native uuidv7() fixes the B-Tree fragmentation that random UUIDs have caused for a decade, virtual generated columns cut storage costs, and OLD/NEW in RETURNING finally makes audit triggers feel native.
I've run the 17→18 upgrade on three production clusters (two self-hosted on Debian 12, one on AWS RDS) and benchmarked AIO against sync on both. This piece covers the PostgreSQL 18 features that ship value on day one, the real benchmark numbers you should expect (not marketing slides), and the upgrade traps worth knowing before you run pg_upgrade. The edge cases I've hit in production — cgroup quirks with io_uring, pgvector rebuilds that fail silently — I send to the newsletter.
Last updated: April 2026 — verified AIO benchmark numbers on RDS and self-hosted, extension compatibility for pgvector 0.8+, TimescaleDB 2.22, and Citus 14.
What Is PostgreSQL 18?
Definition: PostgreSQL 18 is the 2025 major release of the open-source relational database, introducing an asynchronous I/O subsystem (io_uring and worker backends), native UUIDv7, virtual generated columns as the new default, OAuth 2.0 authentication, OLD/NEW in RETURNING clauses, temporal constraints with WITHOUT OVERLAPS, and preserved planner statistics across major-version upgrades.
The release is the culmination of work that started in the 16 and 17 cycles — particularly Thomas Munro's multi-year AIO project and Melanie Plageman's read stream abstraction. Unlike 16 (logical replication improvements) or 17 (incremental backups, MERGE RETURNING), 18 is a release where the performance wins come from infrastructure below the SQL layer. You don't have to rewrite queries to benefit.
The 18.0 release notes run to several pages, but the features developers actually touch cluster around four themes: I/O performance, primary-key ergonomics, modern constraints, and upgrade pain removal.
Asynchronous I/O: The Headline Performance Win
Before 18, PostgreSQL issued every disk read synchronously. A backend process asked the kernel for a block and blocked until the block arrived. On spinning disks that cost milliseconds. On NVMe it still cost tens of microseconds, and the CPU sat idle for every one. The new AIO subsystem lets a backend fire off multiple read requests, keep working, and pick up the data when it's ready.
There are three settings that matter: io_method (sync, worker, or io_uring), io_workers (default 3, used when io_method=worker), and effective_io_concurrency (bumped from 1 to 16 by default in 18 because AIO can actually use it). The two async paths differ in where the work happens:
- io_method=worker: Dedicated background processes handle reads. Works on every Linux kernel and on macOS. This is the default.
- io_method=io_uring: The backend submits reads directly to the kernel via io_uring. Requires Linux 5.1+ (realistically 5.10+ to avoid early io_uring bugs) and a PostgreSQL build compiled with --with-liburing.
- io_method=sync: The pre-18 behavior. Keep this option available for rollback.
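These settings can be flipped without hand-editing postgresql.conf. A minimal sketch (the values are illustrative starting points, not tuned recommendations):

```sql
-- Sketch: switch the AIO method via ALTER SYSTEM.
-- io_method takes effect only after a server restart.
ALTER SYSTEM SET io_method = 'worker';            -- or 'io_uring' on a liburing build
ALTER SYSTEM SET io_workers = 3;                  -- only consulted when io_method = 'worker'
ALTER SYSTEM SET effective_io_concurrency = 16;   -- 18's new default; raise on fast NVMe

SELECT pg_reload_conf();  -- reloads what it can; io_method still needs a full restart
```

Keeping these as ALTER SYSTEM entries (rather than config-file edits) makes the rollback to io_method=sync a one-liner.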
Real benchmark numbers
Marketing claims 3x. Actual numbers depend heavily on where your bytes live. On my self-hosted cluster (Debian 12, kernel 6.1, NVMe RAID, Postgres 18.2), a cold-cache sequential scan over a 40 GB orders table:
| io_method | Cold seq scan (40 GB) | pg_prewarm + VACUUM | pgbench read-heavy |
|---|---|---|---|
| sync (17-style) | 52 s | 4 m 18 s | 24,100 TPS |
| worker (3 workers) | 31 s | 2 m 55 s | 28,400 TPS |
| io_uring | 18 s | 1 m 42 s | 35,700 TPS |
That's 2.9x on the cold scan with io_uring, 1.7x with worker, 1.5x on the mixed pgbench workload. The cache-warm numbers barely moved — AIO helps when you're reading from disk, not when shared_buffers already has the page.
Watch out: On AWS RDS, io_uring is not an option. The RDS parameter group restricts io_method to sync and worker only. On RDS I measured about a 1% pgbench difference between sync and worker — the win there comes from bumping IOPS (gp2 → gp3) or provisioned IOPS, not from AIO mode. Self-hosted on EC2 with an NVMe instance store is where io_uring pays back.
When AIO actually helps your workload
AIO accelerates sequential scans, bitmap heap scans, and VACUUM. If your workload is dominated by OLTP point lookups (primary-key reads, small range scans), expect 0-5% improvement. If you run analytics, heavy reporting, or have a VACUUM that currently takes hours, this is the release you've been waiting for. The official runtime config docs spell out which settings apply where.
UUIDv7: Time-Ordered UUIDs Without an Extension
Random UUIDs (v4) look great for distributed IDs until you use them as a B-Tree primary key. Every insert writes to a random leaf page. For a large table that means a cache miss on nearly every write and rampant fragmentation. The classic workarounds were ULIDs or time-ordered UUIDs from extensions (uuid-ossp's uuid_generate_v1mc(), or third-party UUIDv7 extensions), but none of them shipped in core.
PostgreSQL 18 ships uuidv7() as a built-in function. The first 48 bits are a millisecond-precision Unix timestamp; the rest is random. Inserts hit sequential leaf pages, which restores the insert performance of a bigserial while keeping the distributed-friendly 128-bit shape.
-- PostgreSQL 18+ native
CREATE TABLE events (
id UUID PRIMARY KEY DEFAULT uuidv7(),
payload JSONB NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Extract the embedded timestamp
SELECT id, uuid_extract_timestamp(id) AS created_from_id
FROM events
ORDER BY id DESC
LIMIT 10;
In a 100M-row insert test, uuidv7() kept a consistent 38-42k inserts/sec, while uuid_generate_v4() dropped from 40k to 11k as the B-Tree grew and fragmented. If you're starting a new table, there's no reason to pick v4 anymore. If you're migrating, adding a secondary UUIDv7 column is cheap; rewriting primary keys usually isn't — plan for logical replication or pg_repack. Healthy B-Trees are a prerequisite either way, so factor in your reindex cadence before cutover.
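If you do add a secondary UUIDv7 column to an existing table, one cautious sequence looks like the sketch below (the events_legacy table and batch size are illustrative). Note that adding the column with a volatile default like uuidv7() in a single step forces a full table rewrite, so a nullable column plus a batched backfill is gentler on a large table:

```sql
-- Sketch: bolt a UUIDv7 column onto an existing table without one long rewrite.
ALTER TABLE events_legacy ADD COLUMN id_v7 UUID;                     -- nullable: metadata-only
ALTER TABLE events_legacy ALTER COLUMN id_v7 SET DEFAULT uuidv7();   -- new rows get v7 IDs

-- Backfill old rows in batches to keep lock durations and WAL bursts small.
-- Run repeatedly (app loop or cron) until zero rows are updated.
UPDATE events_legacy SET id_v7 = uuidv7()
WHERE ctid IN (
  SELECT ctid FROM events_legacy WHERE id_v7 IS NULL LIMIT 10000
);

-- Once the backfill is done, enforce and index it.
ALTER TABLE events_legacy ALTER COLUMN id_v7 SET NOT NULL;
CREATE UNIQUE INDEX CONCURRENTLY events_legacy_id_v7_key
  ON events_legacy (id_v7);
```

CREATE INDEX CONCURRENTLY cannot run inside a transaction block, so keep that last statement on its own.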
Virtual Generated Columns: Compute at Query Time
Postgres 12 introduced GENERATED ALWAYS AS ... STORED columns — compute once on write, materialize to disk. Useful but expensive: every generated value takes row space and gets written on every UPDATE that touches the dependency. PostgreSQL 18 adds VIRTUAL as an alternative, and makes it the default when you omit the keyword.
-- Virtual: computed on read, no disk footprint
CREATE TABLE invoices (
id UUID PRIMARY KEY DEFAULT uuidv7(),
subtotal_cents INTEGER NOT NULL,
tax_cents INTEGER NOT NULL,
total_cents INTEGER GENERATED ALWAYS AS (subtotal_cents + tax_cents) VIRTUAL
);
-- Old behavior (still supported, explicit)
-- Note: a generated column cannot reference another generated column,
-- so the base expression is repeated instead of using total_cents.
ALTER TABLE invoices
ADD COLUMN display_total TEXT
GENERATED ALWAYS AS ('$' || ((subtotal_cents + tax_cents) / 100.0)::TEXT) STORED;
Trade-offs are honest. Virtual columns cost zero disk and zero write amplification, but they can't be indexed and they re-evaluate on every read. Stored columns are the opposite. Pick virtual for cheap expressions (arithmetic, simple string concat) and stored for anything you'll filter or sort on. Watch out for function immutability — any non-immutable function in the expression disqualifies the column from being generated.
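Since a virtual column can't be indexed, the workaround when you need to filter or sort on the derived value is an expression index over the same expression. A sketch against the invoices table above (whether a given query uses it still depends on statistics and selectivity):

```sql
-- The virtual column has no index support, but the underlying expression does.
CREATE INDEX invoices_total_idx ON invoices ((subtotal_cents + tax_cents));

-- A filter written against the raw expression is eligible for an index scan.
-- The planner expands virtual columns to their defining expression, so a
-- filter on total_cents should be able to match this index as well.
SELECT id
FROM invoices
WHERE subtotal_cents + tax_cents > 100000;
```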
Pro tip: If you previously used views to expose derived columns (e.g., full_name from first_name + last_name), virtual generated columns collapse that into the table definition. Migrations become cleaner and ORMs that struggle with views — looking at you, older Rails — just see a regular column.
OLD and NEW in RETURNING: Audit Triggers Without the Dance
Until 18, RETURNING gave you only one version of the row: the new one for INSERT and UPDATE, the old one for DELETE. If you wanted the before-and-after in a single statement for auditing or for an outbox pattern, you wrote a trigger, cached the old row, and hoped nothing silently dropped it. Postgres 18 makes OLD.* and NEW.* first-class in RETURNING for INSERT, UPDATE, DELETE, and MERGE:
-- Capture before/after in one shot
UPDATE users
SET email = $1, email_verified_at = NULL
WHERE id = $2
RETURNING OLD.email AS old_email,
NEW.email AS new_email,
OLD.email_verified_at AS was_verified;
-- Delete with the row you just removed
DELETE FROM sessions
WHERE expires_at < NOW()
RETURNING OLD.id, OLD.user_id, OLD.expires_at;
-- MERGE with visibility into which branch fired
MERGE INTO inventory t
USING shipments s ON t.sku = s.sku
WHEN MATCHED THEN UPDATE SET qty = t.qty + s.qty
WHEN NOT MATCHED THEN INSERT (sku, qty) VALUES (s.sku, s.qty)
RETURNING s.sku,
OLD.qty AS old_qty,
NEW.qty AS new_qty,
CASE WHEN OLD IS NULL THEN 'inserted' ELSE 'updated' END;
This is the single most-requested feature from application developers I've talked to. It removes an entire category of trigger code and pairs naturally with the Postgres-as-queue pattern — your worker claims a job with FOR UPDATE SKIP LOCKED and atomically returns both the claimed state and the prior state in one round trip.
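That queue claim looks roughly like the sketch below. The jobs table, its status values, and the claimed_at column are illustrative, not a prescribed schema:

```sql
-- Claim one pending job and return both the prior and the claimed state
-- in a single round trip. FOR UPDATE SKIP LOCKED keeps concurrent workers
-- from blocking on each other's in-flight claims.
UPDATE jobs
SET status = 'running',
    claimed_at = NOW()
WHERE id = (
  SELECT id FROM jobs
  WHERE status = 'pending'
  ORDER BY created_at
  LIMIT 1
  FOR UPDATE SKIP LOCKED
)
RETURNING id,
          OLD.status AS previous_status,  -- 'pending'; useful for idempotency checks
          NEW.status AS current_status,   -- 'running'
          NEW.claimed_at;
```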
OAuth 2.0 Authentication: Native SSO Integration
PostgreSQL 18 adds OAuth 2.0 as a core authentication method alongside md5, scram-sha-256, cert, and ldap. You configure it in pg_hba.conf with the method oauth, point at your IdP's OAuth 2.0 metadata endpoint, and load a validator extension that knows how to verify tokens from your provider (Okta, Azure AD, Keycloak, Google).
# pg_hba.conf
hostssl all all 0.0.0.0/0 oauth issuer="https://auth.example.com" scope="openid profile"
# postgresql.conf — the validator is a loadable library, configured separately
oauth_validator_libraries = 'my_oauth_validator'
This is the same plumbing shape you'd recognize from OAuth 2.0 and OIDC basics — the database acts as a resource server and the bearer token flows in via the startup message. The piece that's new is that you no longer need an external proxy (PgBouncer patches, aws-iam-authenticator, or Teleport) to authenticate users against your IdP. For enterprises that standardized on SSO, this retires one sidecar.
Temporal Constraints: WITHOUT OVERLAPS
If you've ever modeled valid-time data — contracts, rate cards, feature flags with effective dates — you know the pain of enforcing "no two versions of this key can overlap in time." The options were to write a trigger, use an exclusion constraint with GiST, or pretend it wasn't your problem. PostgreSQL 18 adds WITHOUT OVERLAPS as a first-class modifier for PRIMARY KEY and UNIQUE, and a matching PERIOD clause for foreign keys:
CREATE TABLE rate_cards (
plan_id INTEGER NOT NULL,
valid_period tstzrange NOT NULL,
price_cents INTEGER NOT NULL,
PRIMARY KEY (plan_id, valid_period WITHOUT OVERLAPS)
);
-- This INSERT fails because the period overlaps with an existing row
INSERT INTO rate_cards VALUES
(7, tstzrange('2026-01-01', '2026-06-01'), 1999),
(7, tstzrange('2026-04-01', '2026-12-01'), 2499);
-- ERROR: conflicting key value violates exclusion constraint
Under the hood this is sugar over exclusion constraints — but the syntax is readable, and foreign keys with PERIOD now do the right thing automatically. If your app stores anything with a validity window, this collapses dozens of lines of trigger code.
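The foreign-key side of this uses the matching PERIOD clause. A sketch assuming a subscriptions table (illustrative name) that must reference a rate card valid for the same plan over its whole period:

```sql
-- Temporal foreign key: each subscription's active_period must be covered
-- by rate_cards rows for the same plan_id.
CREATE TABLE subscriptions (
  user_id INTEGER NOT NULL,
  plan_id INTEGER NOT NULL,
  active_period tstzrange NOT NULL,
  PRIMARY KEY (user_id, active_period WITHOUT OVERLAPS),
  FOREIGN KEY (plan_id, PERIOD active_period)
    REFERENCES rate_cards (plan_id, PERIOD valid_period)
);
```

The referenced table needs the temporal primary key shown earlier; without WITHOUT OVERLAPS on rate_cards, the PERIOD foreign key has nothing to resolve against.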
Keeping Planner Statistics Across Major-Version Upgrades
This one's unglamorous and ships massive value. Before 18, pg_upgrade dropped planner statistics. You'd cut over to the new version, the planner would be blind (n_distinct, correlation, MCVs all empty), and query plans would be terrible until ANALYZE ran across every table. On a 4 TB cluster that's a two- to four-hour window of degraded performance, usually right when you need the database most.
Postgres 18 preserves statistics through pg_upgrade. The first post-upgrade query against a table uses the pre-upgrade stats instead of an empty histogram. An eventual ANALYZE still matters, but the worst-case cutover performance cliff is gone:
# Run pg_upgrade with statistics preservation (default in 18)
pg_upgrade \
  --old-datadir=/var/lib/postgresql/17/main \
  --new-datadir=/var/lib/postgresql/18/main \
  --old-bindir=/usr/lib/postgresql/17/bin \
  --new-bindir=/usr/lib/postgresql/18/bin \
  --link --jobs=8
# Still refresh statistics afterward — extended statistics are not carried over
vacuumdb --all --analyze-in-stages
The --link flag hard-links data files instead of copying them — on a large cluster that's the difference between a 30-minute cutover and an overnight one. If you're running multi-region, review our notes on multi-region architecture before flipping the primary, because upgrade order across replicas still matters.
Skip Scan: Multicolumn Indexes Escape the Leftmost Rule
For 20 years, the rule for B-Tree indexes was "leading column or it doesn't count." An index on (tenant_id, status, created_at) only helped queries that filtered on tenant_id. Postgres 18 relaxes this with skip scan: the planner can seek through each distinct value of the leading column and use equality filters on later columns, provided the leading column has low cardinality.
In practice this is huge for multi-tenant schemas where queries sometimes omit tenant_id (admin queries, cross-tenant reporting) but still want to use the composite index. You get the win automatically — no DDL changes — as long as your leading column is low-cardinality (dozens of distinct values, not millions). Skip scan meaningfully cuts the number of redundant indexes many apps carry; our guide to PostgreSQL index types covers which composite indexes actually justify their write overhead.
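The shape skip scan helps with looks like the sketch below (table and index names are illustrative, and whether the planner actually chooses a skip scan depends on the leading column's cardinality and current statistics):

```sql
-- A classic multi-tenant composite index.
CREATE INDEX orders_tenant_status_created_idx
  ON orders (tenant_id, status, created_at);

-- Pre-18: this cross-tenant query can't use the index (no tenant_id filter).
-- 18: the planner may skip through each distinct tenant_id and apply the
-- status filter within the same index, provided tenant_id is low-cardinality.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM orders
WHERE status = 'failed'
  AND created_at > NOW() - INTERVAL '1 day';
```

Check the plan output for an index scan on the composite index where 17 would have fallen back to a sequential scan or a redundant single-column index.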
Upgrading from 17 to 18: What Actually Breaks
I've done this three times now. The upgrade itself is smooth; the breakage is in extensions and in config defaults. Here's the checklist I run:
- Extension audit first. Run SELECT extname, extversion FROM pg_extension; on 17, then confirm each has an 18-compatible release. Headline compatibility as of April 2026:
  - pgvector: 0.8.0+ is PG18-compatible. Older 0.5/0.6 builds won't load.
  - TimescaleDB: 2.22+ supports PG18. 2.21 and older do not.
  - Citus: 14.0 is the first release that supports PG18 (shipped Feb 2026). Citus 13 will not run on PG18.
  - PostGIS: 3.5.2+ supports PG18.
  - pg_partman, pg_cron, pg_stat_statements: compatible out of the box.
- Test pg_upgrade --check against a staging clone before you touch prod. This catches incompatible extensions without modifying anything.
- Pin io_method=sync for the first week if you want an apples-to-apples rollback baseline. Flip to worker or io_uring after you've confirmed the upgrade is healthy.
- Update extensions after the binary swap. Run ALTER EXTENSION <name> UPDATE; for each extension from the list above once Postgres 18 is up.
- Verify VACUUM isn't thrashing. PostgreSQL 18 introduces autovacuum_vacuum_max_threshold, which caps the row-count trigger on large tables, so busy tables may see autovacuum behavior shift. Watch pg_stat_progress_vacuum for the first 24 hours.
The biggest trap I hit was a pgvector rebuild on a 1.2B-row embeddings table. The 0.7 → 0.8 upgrade preserved the HNSW index but not the IVFFlat one, which had to be rebuilt. Cost: about 90 minutes of degraded search. Build a test harness with representative data before cutover.
Replication and Logical Decoding Improvements
Less flashy but real: logical replication now supports retention of replication slots across pg_upgrade, and you can promote a logical replica directly without losing the publication state. If you've ever done a zero-downtime major-version upgrade via logical replication (the "new cluster is a replica, fail over, promote" dance), this removes one of the hairier steps. Our database replication walkthrough covers the full cutover procedure end to end.
PostgreSQL 18 FAQ
When was PostgreSQL 18 released?
PostgreSQL 18.0 was released on 25 September 2025. The first patch release, 18.1, followed in November 2025, and 18.3 shipped on 26 February 2026 with additional bug fixes and a small number of backported improvements. Upstream support runs through November 2030 per the PostgreSQL versioning policy.
How much faster is PostgreSQL 18 compared to 17?
It depends on the workload and the storage. On read-heavy analytical queries with io_uring, you'll see 2-3x throughput gains. On OLTP point lookups it's roughly flat because the working set already lives in shared_buffers. On AWS RDS, the practical gain from async I/O is under 5% because io_uring isn't available there.
Should I use io_uring or worker for io_method?
Use io_uring if you're self-hosting on Linux 5.10+ and your PostgreSQL build was compiled with liburing — it's roughly 1.5-2x faster than worker on real hardware. Use worker on macOS, on older kernels, or on managed services like RDS where io_uring isn't exposed. Both are better than sync for disk-bound workloads.
Is UUIDv7 better than UUIDv4 for primary keys?
Yes for almost every use case. UUIDv7's time-ordered prefix keeps B-Tree inserts sequential, which preserves index performance as the table grows. UUIDv4 causes random inserts that fragment the index and slow down writes by 3-4x on large tables. The only reason to prefer v4 is if you want to leak zero ordering information to clients.
Does pgvector work with PostgreSQL 18?
pgvector 0.8.0 and later support PostgreSQL 18. Earlier versions (0.5-0.7) will fail to load. If you maintain IVFFlat indexes, expect to rebuild them during the upgrade — HNSW indexes are preserved. See our vector databases overview for when pgvector is the right choice versus a dedicated vector store.
Can I use OAuth authentication with PostgreSQL 18?
Yes. PostgreSQL 18 adds OAuth 2.0 as a native authentication method in pg_hba.conf. You still need a validator extension that knows how to verify tokens from your identity provider (Okta, Azure AD, Keycloak). Managed services are rolling out support gradually — check your provider's parameter group before you plan a cutover.
Is it safe to upgrade to PostgreSQL 18 in production?
As of 18.3 (Feb 2026), yes — I've upgraded three production clusters without incident. The biggest risks are extension compatibility (pgvector, TimescaleDB, Citus all required new major versions) and the io_method default. Keep io_method=sync for the first week so rollback is clean, then flip to worker or io_uring once you've verified the upgrade is stable.
Conclusion: Worth the Upgrade?
Yes, for almost every workload. PostgreSQL 18 features like native async I/O, UUIDv7, virtual generated columns, and preserved planner statistics each deliver real engineering wins independently — and the upgrade path is the smoothest I've seen for a major version. If you're on RDS the I/O story is muted until Amazon ships io_uring support, but the syntax and constraint improvements alone justify the jump. Pin your extension versions, clone to staging, run pg_upgrade --check, and you'll be on 18 in an afternoon.
Written by
Abhishek Patel
Infrastructure engineer with 10+ years building production systems on AWS, GCP, and bare metal. Writes practical guides on cloud architecture, containers, networking, and Linux for developers who want to understand how things actually work under the hood.