
Durable Objects: The Primitive AWS Doesn't Have
Cloudflare's Durable Objects give you single-threaded, globally unique compute with embedded SQLite. AWS has no equivalent. Here's how they change backend architecture.
Every few years, a cloud primitive shows up that doesn't fit neatly into existing mental models. Containers did it. Lambda did it. Durable Objects are doing it now.
If you've been building on AWS, you've internalized a specific way of thinking about stateful services: you have compute (Lambda, ECS, EC2), and you have storage (DynamoDB, RDS, S3). They're separate. Always. You coordinate between them with network calls, retries, and eventual consistency headaches.
Cloudflare's Durable Objects break that model entirely. And after running them in production for the past eight months, I think they represent a genuinely different approach to a class of problems that traditional cloud makes unnecessarily hard.
What Durable Objects actually are
Skip the marketing copy. Here's the mental model that clicked for me:
A Durable Object is a single-threaded JavaScript class instance with a globally unique ID that Cloudflare keeps alive across requests. It has its own embedded SQLite database. It runs in one location at a time. All requests to that object are serialized through that single instance.
That's it. But the implications are significant.
import { DurableObject } from "cloudflare:workers";

export class GameRoom extends DurableObject {
  constructor(ctx, env) {
    super(ctx, env);
    // This runs once when the object is created or wakes up
    ctx.storage.sql.exec(`
      CREATE TABLE IF NOT EXISTS players (
        id TEXT PRIMARY KEY,
        name TEXT,
        score INTEGER DEFAULT 0,
        connected_at TEXT
      )
    `);
  }

  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === "/join") {
      const { playerId, name } = await request.json();
      // No race condition. Single-threaded. No distributed lock needed.
      this.ctx.storage.sql.exec(
        `INSERT OR REPLACE INTO players (id, name, connected_at) VALUES (?, ?, ?)`,
        playerId, name, new Date().toISOString()
      );
      const count = this.ctx.storage.sql.exec(
        `SELECT COUNT(*) AS c FROM players`
      ).one().c;
      return Response.json({ players: count });
    }
    return new Response("Not found", { status: 404 });
  }
}
No connection pooling. No ORM. No separate database service. The SQLite database lives inside the object itself, and because the object is single-threaded, you get strong consistency without any coordination protocol.
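Requests reach a Durable Object through a namespace binding in a Worker: `env.GAME_ROOM.idFromName(name)` derives a stable ID and `.get(id)` returns a stub to the single live instance. The toy sketch below mimics that addressing contract in plain Node, outside the Workers runtime, so the key property is visible: same name, same instance, shared state. All names here are illustrative.

```javascript
// Toy in-memory model of a Durable Object namespace: idFromName() maps a
// name to a stable ID, and get() always returns the same instance for it.
class Namespace {
  constructor(ObjectClass) {
    this.ObjectClass = ObjectClass;
    this.instances = new Map();
  }
  idFromName(name) {
    return name; // real IDs are derived hashes, not the raw name
  }
  get(id) {
    if (!this.instances.has(id)) {
      this.instances.set(id, new this.ObjectClass());
    }
    return this.instances.get(id); // same name -> same single instance
  }
}

class GameRoom {
  constructor() { this.players = 0; }
  join() { return ++this.players; }
}

const rooms = new Namespace(GameRoom);
const a = rooms.get(rooms.idFromName("room-42"));
const b = rooms.get(rooms.idFromName("room-42"));
a.join();
b.join(); // a and b are the same object, so both joins hit the same state
```

Because every caller that names "room-42" lands on the same instance, there is no cache to invalidate and no replica to reconcile.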
The problems this actually solves
I keep seeing Durable Objects explained through toy examples. Chat rooms. Counters. Rate limiters. Those are fine for demos, but they undersell what this primitive enables.
Here are the real production patterns where Durable Objects eliminated complexity I'd otherwise need Lambda + DynamoDB + SQS + Step Functions to handle:
Per-tenant isolated state
We run a multi-tenant SaaS where each tenant has configuration, usage meters, and feature flags. Before Durable Objects, this lived in DynamoDB with tenant-partitioned keys. Every read required a network round-trip. Every write needed conditional expressions to avoid race conditions.
Now each tenant is a Durable Object. Their config lives in SQLite. Reads are local. Writes are serialized. No races, no conditional writes, no partition hot-key problems.
export class TenantConfig extends DurableObject {
  async getFeatureFlags() {
    // Local SQLite read. Sub-millisecond. No network.
    return this.ctx.storage.sql.exec(
      `SELECT feature, enabled FROM feature_flags`
    ).toArray();
  }

  async updateUsage(metric, delta) {
    // Single-threaded. No optimistic locking needed.
    this.ctx.storage.sql.exec(
      `UPDATE usage_meters SET value = value + ? WHERE metric = ?`,
      delta, metric
    );
  }
}
Coordination without distributed locks
Booking systems. Inventory management. Anything where two concurrent requests can create an invalid state. On AWS, you solve this with DynamoDB conditional writes, or Redlock, or Step Functions with idempotency keys. Each solution adds latency, complexity, and failure modes.
With Durable Objects, the problem vanishes. One object per bookable resource. Requests are serialized. If two people try to book the same slot, one goes first, the other sees the updated state. No lock. No retry. No conditional expression.
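The serialization guarantee can be modeled in plain Node with an explicit promise chain standing in for the object's single thread; the Durable Objects runtime does this queuing for you. Names below are illustrative.

```javascript
// Toy model of one bookable resource. Requests are chained through a queue
// so only one runs at a time, mimicking a Durable Object's serialization.
class BookingSlot {
  constructor() {
    this.bookedBy = null;
    this.queue = Promise.resolve(); // serializes all incoming requests
  }

  book(userId) {
    const result = this.queue.then(() => {
      // No lock, no conditional write: nothing else can run concurrently.
      if (this.bookedBy) return { ok: false, holder: this.bookedBy };
      this.bookedBy = userId;
      return { ok: true, holder: userId };
    });
    this.queue = result.then(() => {}, () => {});
    return result;
  }
}

const slot = new BookingSlot();
let first, second;
// Two "simultaneous" bookings: one wins, the other sees the updated state.
const done = Promise.all([slot.book("alice"), slot.book("bob")])
  .then(([a, b]) => { first = a; second = b; });
```

The second request doesn't fail with a conflict error; it simply observes that the slot is already taken, which is exactly the behavior you'd otherwise build with conditional writes and retries.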
Scheduled work per entity
Each Durable Object has an alarm API. Set a timer, and Cloudflare wakes the object at that time, even if it was evicted from memory.
This replaces an entire pattern I've built multiple times on AWS: EventBridge scheduled rules triggering Lambda functions that query DynamoDB to find which entities need processing. That's three services, a polling pattern, and a fan-out problem. On Durable Objects, each entity manages its own schedule.
export class Subscription extends DurableObject {
  async activate(plan, nextBillingDate) {
    this.ctx.storage.sql.exec(
      `INSERT INTO subscriptions (plan, next_billing) VALUES (?, ?)`,
      plan, nextBillingDate
    );
    // Set alarm for next billing cycle
    await this.ctx.storage.setAlarm(new Date(nextBillingDate));
  }

  async alarm() {
    // Cloudflare wakes this object at the scheduled time
    const sub = this.ctx.storage.sql.exec(
      `SELECT * FROM subscriptions LIMIT 1`
    ).one();
    await this.processRenewal(sub); // processRenewal: your billing logic
    // Schedule next cycle. addMonth: helper that advances an ISO date by one month.
    const next = addMonth(sub.next_billing);
    this.ctx.storage.sql.exec(
      `UPDATE subscriptions SET next_billing = ?`, next
    );
    await this.ctx.storage.setAlarm(new Date(next));
  }
}
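The `addMonth` helper above is assumed rather than shown; a minimal sketch, with the calendar edge case flagged:

```javascript
// Minimal sketch of an addMonth helper: advance an ISO-8601 timestamp by one
// calendar month. Beware the rollover edge case (Jan 31 + 1 month lands in
// early March); real billing code would clamp to the last day of the month.
function addMonth(isoDate) {
  const d = new Date(isoDate);
  d.setUTCMonth(d.getUTCMonth() + 1);
  return d.toISOString();
}

const next = addMonth("2026-01-15T00:00:00.000Z"); // mid-month: no rollover
```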
The trade-offs nobody talks about
I'm not here to sell you Cloudflare stock. Durable Objects have real constraints, and if you don't understand them upfront, you'll build the wrong thing.
Single location, not global
A Durable Object runs in one Cloudflare data center. The first time a request creates or reaches the object, Cloudflare picks a location (usually near the first caller). After that, all requests route to that location.
If your users are in Tokyo and your Durable Object got pinned to Frankfurt, every request adds 200ms of network latency. You can hint the location using locationHint, but you can't move an existing object.
For globally distributed read-heavy workloads, you still want Workers KV or R2 in front. Durable Objects are for coordination and writes, not for serving static reads globally.
SQLite size limits
Each Durable Object's SQLite database has a 10GB storage limit (as of early 2026). That's generous for per-entity state, but it means Durable Objects aren't a replacement for your analytics database or event store. They're for operational state: user sessions, tenant config, game rooms, document state.
No cross-object transactions
There's no way to atomically update two Durable Objects. If you need to transfer credits between two users, you need to design for eventual consistency between the objects, or route both operations through a single coordinating object.
This is the biggest architectural constraint. On a traditional database, you'd wrap both writes in a transaction. Here, you think about sagas and compensating actions. It's the same trade-off as microservices, but at a finer granularity.
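The compensating-action pattern looks like this in miniature. The sketch below uses plain classes with illustrative names; in a real system each account would be a Durable Object reached through a stub, and the transfer logic would live in a coordinating object or Worker.

```javascript
// Toy saga: move credits between two independent "objects" with no shared
// transaction. Debit first, credit second; if the credit fails, run the
// compensating action (refund the debit). All names are illustrative.
class Account {
  constructor(balance) {
    this.balance = balance;
    this.closed = false;
  }
  async debit(amount) {
    if (this.balance < amount) throw new Error("insufficient funds");
    this.balance -= amount;
  }
  async credit(amount) {
    if (this.closed) throw new Error("account closed");
    this.balance += amount;
  }
}

async function transfer(from, to, amount) {
  await from.debit(amount);      // step 1: may throw, nothing to undo yet
  try {
    await to.credit(amount);     // step 2: separate object, not atomic
  } catch (err) {
    await from.credit(amount);   // compensating action: undo the debit
    throw err;
  }
}

const alice = new Account(100);
const bob = new Account(0);
const happy = transfer(alice, bob, 30); // succeeds: 70 / 30
```

The window between the debit and the credit is observable: a read of both accounts mid-transfer sees money in flight. That's the eventual-consistency cost the paragraph above describes, and it's why idempotency and compensation have to be designed in rather than bolted on.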
Cold starts exist (but they're fast)
When a Durable Object hasn't received a request in a while, Cloudflare evicts it from memory. The next request triggers a cold start: the object's constructor runs, SQLite state is loaded from disk. In my experience, this takes 10-20ms, which is an order of magnitude faster than Lambda cold starts but not zero.
For latency-sensitive paths, you can keep the object warm with periodic pings, but that defeats some of the cost efficiency.
When to use Durable Objects vs. traditional serverless
After eight months, here's my decision framework:
Use Durable Objects when:
- State is naturally partitioned by entity (user, tenant, room, document)
- You need strong consistency within an entity
- You need per-entity scheduling or timers
- You're building real-time features (WebSockets land on the right object automatically)
- Your write patterns are entity-scoped, not cross-entity
Stay on Lambda + DynamoDB when:
- You need cross-entity transactions
- Your workload is compute-heavy (ML inference, image processing)
- You need more than 128MB of memory per execution
- Your data access patterns don't fit entity partitioning
- You're already invested in the AWS ecosystem and the complexity is manageable
The hybrid approach works too. We use Durable Objects for real-time coordination and per-tenant state, and AWS for batch processing, ML pipelines, and anything that needs cross-entity queries. They're not competing architectures. They're complementary.
The SQLite angle is underrated
When Cloudflare added SQLite as a Durable Objects storage backend in 2024, it changed the calculus. The original key-value API was limiting: no indexes, no joins, no aggregate queries.
With SQLite, each Durable Object is a tiny, fully relational database that happens to have compute attached. You can write complex queries, create indexes, use transactions within the object. The point-in-time recovery API means you can roll back to any point in the last 30 days.
// Complex query inside a Durable Object. Runs locally.
const topPlayers = this.ctx.storage.sql.exec(`
  SELECT p.name, p.score,
         RANK() OVER (ORDER BY p.score DESC) AS rank
  FROM players p
  WHERE p.last_active > datetime('now', '-1 hour')
  ORDER BY p.score DESC
  LIMIT 10
`).toArray();
Compare this to DynamoDB, where a leaderboard query requires a GSI, careful key design, and you still can't do window functions. The developer experience gap is massive.
What this means for backend architecture
Durable Objects represent a shift in how we think about the compute-storage boundary. For a decade, we've accepted that compute and storage are separate concerns with a network in between. Durable Objects collapse that boundary for entity-scoped state.
This isn't going to replace PostgreSQL for your core transactional database. It's not going to replace S3 for object storage. But for the class of problems where you need isolated, strongly consistent, per-entity state with built-in scheduling and real-time capabilities, there's nothing else in the market that does it this cleanly.
AWS will probably build something similar eventually. They usually do, two to three years after someone else proves the model. Until then, Durable Objects are one of the few genuine innovations in cloud infrastructure, and they're worth learning even if you never deploy to Cloudflare. The mental model of "addressable objects with embedded storage" will influence how you design systems everywhere.