MongoDB-compatible RPC server using PostgreSQL (via @dotdo/postgres) as the backend

`npm install @dotdo/documentdb`

Your team loves the document model. Flexible schemas. Nested objects. Easy to start.
But MongoDB has problems:
- Operational overhead - Replica sets, sharding, backups... it's a full-time job.
- Query limitations - No joins. No transactions across collections. Working around them is painful.
- Cost unpredictability - Atlas pricing scales weirdly. Small projects pay enterprise prices.
You want document semantics. You don't want to give up relational power.
---
@dotdo/documentdb is MongoDB semantics backed by PostgreSQL JSONB.
Same find(). Same aggregate(). SQL joins when you need them.
```typescript
import { DocumentDBClient } from '@dotdo/documentdb'
const client = new DocumentDBClient('https://db.postgres.do/mydb')
const db = client.db('myapp')
const users = await db.collection('users').find({ active: true }).toArray()
```
Your documents are stored in PostgreSQL. You get ACID transactions, foreign keys, and full SQL when you need it.
---
```bash
npm install @dotdo/documentdb
```

```typescript
import { DocumentDBClient } from '@dotdo/documentdb'

const client = new DocumentDBClient('https://db.postgres.do/mydb')
const db = client.db('myapp')
const users = db.collection('users')
```
```typescript
// Insert documents
await users.insertOne({ name: 'John', email: 'john@example.com' })

// Find with query operators
const activeUsers = await users.find({
  active: true,
  age: { $gte: 18 }
}).toArray()

// Aggregation pipeline
const stats = await users.aggregate([
  { $match: { active: true } },
  { $group: { _id: '$department', count: { $sum: 1 } } }
]).toArray()
```
That's it. MongoDB API with PostgreSQL reliability.
---
| MongoDB Atlas | @dotdo/documentdb |
|---------------|-------------------|
| Separate infrastructure | PostgreSQL you already know |
| No joins | Full SQL joins available |
| Collection-level transactions | ACID transactions across everything |
| Atlas pricing | Edge pricing with hibernation |
---
```typescript
const users = db.collection('users')
// Insert
await users.insertOne({ name: 'John' })
await users.insertMany([{ name: 'Jane' }, { name: 'Bob' }])
// Find
const cursor = users.find({ active: true })
const user = await users.findOne({ _id: id })
// Update
await users.updateOne({ _id: id }, { $set: { verified: true } })
await users.updateMany({ active: false }, { $set: { archived: true } })
// Delete
await users.deleteOne({ _id: id })
await users.deleteMany({ status: 'spam' })
// Count
const count = await users.countDocuments({ active: true })
```
```typescript
// Comparison
await users.find({ age: { $gt: 18, $lt: 65 } })
await users.find({ status: { $in: ['active', 'pending'] } })
await users.find({ status: { $ne: 'deleted' } })

// Logical
await users.find({
  $or: [{ status: 'active' }, { role: 'admin' }]
})
await users.find({
  $and: [{ age: { $gte: 18 } }, { verified: true }]
})

// Element
await users.find({ email: { $exists: true } })

// Array
await users.find({ tags: { $all: ['nodejs', 'typescript'] } })
await users.find({ scores: { $elemMatch: { $gt: 80 } } })

// Regex
await users.find({ name: { $regex: /^john/i } })
```
```typescript
const results = await users.aggregate([
  // Filter documents
  { $match: { active: true } },

  // Group and aggregate
  { $group: {
    _id: '$department',
    count: { $sum: 1 },
    avgSalary: { $avg: '$salary' },
    names: { $push: '$name' }
  }},

  // Sort results
  { $sort: { count: -1 } },

  // Limit output
  { $limit: 10 },

  // Project fields
  { $project: {
    department: '$_id',
    count: 1,
    avgSalary: { $round: ['$avgSalary', 2] }
  }},

  // Join with another collection (PostgreSQL superpower)
  { $lookup: {
    from: 'departments',
    localField: '_id',
    foreignField: '_id',
    as: 'deptInfo'
  }}
]).toArray()
```
```typescript
await users.updateOne({ _id: id }, {
  $set: { status: 'active' },
  $unset: { tempField: '' },
  $inc: { loginCount: 1 },
  $push: { tags: 'premium' },
  $pull: { tags: 'trial' },
  $addToSet: { roles: 'admin' },
  $currentDate: { lastModified: true }
})
```

---
Full MongoDB ObjectId compatibility:
```typescript
import { ObjectId, createObjectId, isValidObjectId } from '@dotdo/documentdb'
// Generate new ObjectId
const id = createObjectId()
// '507f1f77bcf86cd799439011'
// Validate ObjectId
isValidObjectId('507f1f77bcf86cd799439011') // true
// Use in queries
await users.find({ _id: new ObjectId('507f1f77bcf86cd799439011') })
```
---
Deploy as a Cloudflare Durable Object:
```typescript
import { createDocumentDBDO } from '@dotdo/documentdb'

// Export the Durable Object
export const DocumentDB = createDocumentDBDO()

export default {
  fetch(request, env) {
    const id = env.DOCUMENT_DB.idFromName('mydb')
    const stub = env.DOCUMENT_DB.get(id)
    return stub.fetch(request)
  }
}
```
---
Storing documents as JSONB means you can use SQL directly:
```typescript
// MongoDB query
await users.find({ active: true, age: { $gte: 18 } })

// Or use SQL for complex operations
await pglite.query(`
  SELECT u.doc, COUNT(o.doc) as order_count
  FROM users u
  LEFT JOIN orders o ON o.doc->>'userId' = u.doc->>'_id'
  WHERE u.doc @> '{"active": true}'
  GROUP BY u.doc
`)
```
Best of both worlds. Document API for speed. SQL for power.
---
| Feature | @dotdo/documentdb | MongoDB Atlas |
|---------|-------------------|---------------|
| Cache reads | FREE | Per-query cost |
| Idle databases | $0 (hibernation) | $$ (always running) |
| Per-tenant DBs | Built-in | Complex setup |
| Edge locations | 300+ | Limited regions |
---
Create a client connection, get a database instance, and get a collection.
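The snippet below strings those three calls together, reusing the example endpoint and names from the quickstart; substitute your own values:

```typescript
import { DocumentDBClient } from '@dotdo/documentdb'

// Create a client connection
const client = new DocumentDBClient('https://db.postgres.do/mydb')

// Get a database instance
const db = client.db('myapp')

// Get a collection
const users = db.collection('users')
```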
Collection methods:

- insertOne(doc) - Insert single document
- insertMany(docs) - Insert multiple documents
- find(filter) - Query documents
- findOne(filter) - Find single document
- updateOne(filter, update) - Update single document
- updateMany(filter, update) - Update multiple documents
- deleteOne(filter) - Delete single document
- deleteMany(filter) - Delete multiple documents
- aggregate(pipeline) - Run aggregation pipeline
- countDocuments(filter) - Count matching documents
---
@dotdo/documentdb supports most MongoDB query operators by translating them to PostgreSQL JSONB operations.
| Category | Operator | Supported | Notes |
|----------|----------|-----------|-------|
| Comparison | $eq | Yes | Exact value matching |
| | $ne | Yes | Not equal matching |
| | $gt | Yes | Greater than (numeric) |
| | $gte | Yes | Greater than or equal |
| | $lt | Yes | Less than (numeric) |
| | $lte | Yes | Less than or equal |
| | $in | Yes | Match any value in array |
| | $nin | Yes | Match none in array |
| Logical | $and | Yes | Combine with AND |
| | $or | Yes | Combine with OR |
| | $nor | Yes | Match none (NOT OR) |
| | $not | Yes | Negate expression |
| Element | $exists | Yes | Field existence check |
| | $type | Yes | JSONB type checking |
| String | $regex | Yes | PostgreSQL regex (~) |
| | $options | Partial | Case-insensitive needs fix |
| | $text | No | Requires FTS setup |
| Array | $all | Yes | All elements present |
| | $elemMatch | Yes | Element condition match |
| | $size | Yes | Array length check |
| Evaluation | $where | Yes | Sandboxed via ai-evaluate |
| | $mod | Yes | Modulo operations |
| | $expr | Yes | Field comparisons |
| Geospatial | $geoWithin | No | Requires PostGIS |
| | $geoIntersects | No | Requires PostGIS |
| | $near | No | Requires PostGIS |
| | $nearSphere | No | Requires PostGIS |
| Bitwise | $bitsAllClear | No | Not implemented |
| | $bitsAllSet | No | Not implemented |
| | $bitsAnyClear | No | Not implemented |
| | $bitsAnySet | No | Not implemented |
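As a rough sketch of what that translation means in practice, here is a MongoDB filter next to a hand-written JSONB equivalent. The SQL is illustrative only; the package generates its own queries, which may differ:

```typescript
// MongoDB-style filter
await users.find({
  status: { $in: ['active', 'pending'] },
  age: { $gte: 18 }
})

// A hand-written PostgreSQL JSONB equivalent of the same filter
await pglite.query(`
  SELECT doc
  FROM users
  WHERE doc->>'status' IN ('active', 'pending')
    AND (doc->>'age')::numeric >= 18
`)
```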
| Operator | Supported | Notes |
|----------|-----------|-------|
| $set | Yes | Set field values |
| $unset | Yes | Remove fields |
| $inc | Yes | Increment numeric values |
| $mul | Yes | Multiply numeric values |
| $min | Yes | Update if less than |
| $max | Yes | Update if greater than |
| $rename | Yes | Rename fields |
| $push | Yes | Add to array |
| $pop | Yes | Remove from array end |
| $pull | Yes | Remove matching elements |
| $addToSet | Yes | Add unique to array |
| $currentDate | Yes | Set to current date |
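A quick example covering the operators not shown in the earlier update snippet; the field names here are made up for illustration:

```typescript
await users.updateOne({ _id: id }, {
  $mul: { balance: 1.05 },          // multiply a numeric field
  $min: { lowestScore: 40 },        // keep the smaller of the current value and 40
  $max: { highestScore: 95 },       // keep the larger of the current value and 95
  $rename: { username: 'handle' },  // rename a field
  $pop: { recentLogins: 1 }         // remove the last element of an array
})
```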
| Stage | Supported | Notes |
|-------|-----------|-------|
| $match | Yes | Filter documents |
| $project | Yes | Field selection |
| $sort | Yes | Sort results |
| $limit | Yes | Limit results |
| $skip | Yes | Skip results |
| $count | Yes | Count documents |
| $group | Partial | Basic grouping |
| $lookup | Yes | Collection joins |
| $unwind | Partial | Array expansion |
| $addFields | Yes | Add computed fields |
| $facet | No | Not implemented |
| $bucket | No | Not implemented |
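For instance, a paging-style pipeline that sticks to stages the table marks as fully supported (the field names are illustrative):

```typescript
const page = await users.aggregate([
  { $match: { active: true } },
  { $addFields: { fullName: '$name' } },
  { $sort: { fullName: 1 } },
  { $skip: 20 },
  { $limit: 10 }
]).toArray()
```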
---
Every day you keep MongoDB separate from PostgreSQL, you're stuck with:
- Two databases to manage, monitor, backup
- No joins between documents and relational data
- Duplicate data to work around limitations
- Two bills, two sets of credentials
Get documents + relational today. One database. Full power.
```bash
npm install @dotdo/documentdb
```
---
The documentdb package deploys as a Cloudflare Worker to mongo.do.
1. Cloudflare Account with Workers and Durable Objects enabled
2. Wrangler CLI installed and authenticated (wrangler login)
3. DNS Configuration: mongo.do zone configured in Cloudflare
The wrangler.jsonc file configures the worker. Key settings for production:
```jsonc
{
  "name": "mongo-do",
  "main": "src/worker/index.ts",
  "compatibility_date": "2026-01-15",
  "compatibility_flags": ["nodejs_compat"],

  // Durable Objects for document storage
  "durable_objects": {
    "bindings": [{ "name": "DOCUMENTDB_DO", "class_name": "DocumentDBDO" }]
  },

  // Production routes - use Workers Custom Domains
  "routes": [
    { "pattern": "mongo.do/*", "custom_domain": true },
    { "pattern": "api.mongo.do/*", "custom_domain": true }
  ],

  // Environment
  "vars": {
    "ENVIRONMENT": "production",
    "OAUTH_ENABLED": "false"
  }
}
```
Configure these DNS records in Cloudflare:
| Type | Name | Content | Proxy |
|------|------|---------|-------|
| AAAA | mongo.do | 100:: | Proxied |
| AAAA | api.mongo.do | 100:: | Proxied |
| AAAA | staging.mongo.do | 100:: | Proxied |
The 100:: address is Cloudflare's reserved address for Workers Custom Domains.
```bash
# Navigate to the documentdb package
cd packages/documentdb
```
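From there, a standard Wrangler deploy should publish the worker using the wrangler.jsonc shown above (command assumes a recent Wrangler version):

```bash
# Deploy the worker defined by wrangler.jsonc
wrangler deploy
```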
After deployment, verify the worker is responding:
```bash
# Health check
curl https://mongo.do/health
# Expected response:
# {"status":"ok","service":"mongo.do","timestamp":"...","environment":"production"}

# Root endpoint
curl https://mongo.do/
# Expected response:
# {"name":"mongo.do","version":"0.0.1","description":"MongoDB at the edge...","status":"ok"}
```
| Variable | Production | Staging | Description |
|----------|------------|---------|-------------|
| ENVIRONMENT | production | staging | Environment name |
| OAUTH_ENABLED | false | false | Enable OAuth authentication |
| OAUTH_URL | https://oauth.do | https://oauth.do | OAuth provider URL |
| DOCUMENTDB_DEFAULT_DB | documentdb | documentdb | Default database name |
| MAX_DOCUMENT_SIZE_KB | 16384 | 16384 | Max document size (16MB) |
| MAX_BATCH_SIZE | 1000 | 1000 | Max bulk operation batch |
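One way to express the production/staging split from this table is Wrangler's per-environment overrides. This is a sketch, not the shipped config; check the package's wrangler.jsonc for the actual layout:

```jsonc
{
  "vars": {
    "ENVIRONMENT": "production",
    "OAUTH_ENABLED": "false",
    "OAUTH_URL": "https://oauth.do",
    "DOCUMENTDB_DEFAULT_DB": "documentdb",
    "MAX_DOCUMENT_SIZE_KB": "16384",
    "MAX_BATCH_SIZE": "1000"
  },
  // Staging overrides, applied with `wrangler deploy --env staging`
  "env": {
    "staging": {
      "vars": { "ENVIRONMENT": "staging" }
    }
  }
}
```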
For enhanced storage and caching, create these resources:
```bash
# R2 bucket for document backups
wrangler r2 bucket create documentdb-storage

# KV namespace for caching
wrangler kv:namespace create CACHE
```

Then uncomment the r2_buckets and kv_namespaces sections in wrangler.jsonc.
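Those sections, once uncommented, would look roughly like this; the binding names are illustrative, so match them to whatever the shipped wrangler.jsonc actually declares:

```jsonc
{
  // R2 bucket binding for document backups (binding name is illustrative)
  "r2_buckets": [
    { "binding": "DOCUMENTDB_STORAGE", "bucket_name": "documentdb-storage" }
  ],
  // KV namespace binding for caching
  "kv_namespaces": [
    { "binding": "CACHE", "id": "<kv-namespace-id>" }
  ]
}
```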
---

Part of the postgres.do Ecosystem:

| Package | Description |
|---------|-------------|
| `@dotdo/mongodb` | MongoDB client wrapper |
| `mongo.do` | Managed MongoDB service |
| `@dotdo/postgres` | PostgreSQL server |
| `postgres.do` | SQL tagged template client |

---
- Documentation
- GitHub
- MongoDB Docs (API reference)
MIT