The PostgreSQL client/server binary protocol, implemented in TypeScript. A PostgreSQL wire protocol library for JavaScript; install with `npm install @dotdo/pg-protocol`.
```typescript
import { serialize, Parser, messages } from '@dotdo/pg-protocol'

// Build protocol messages
const startup = serialize.startup({ user: 'postgres', database: 'mydb' })
const query = serialize.query('SELECT * FROM users')

// Parse responses
const parser = new Parser()
parser.parse(buffer, (msg) => {
  if (msg.name === 'dataRow') {
    console.log('Row:', msg.fields)
  }
})
```
You're building database tooling. You need:
- Direct PostgreSQL communication (no client library overhead)
- Custom connection pooling (your way, not theirs)
- Protocol-level features (COPY, replication, SASL)
- Portable code (Node.js, Bun, Cloudflare Workers, browser)
The old way: Wrap an existing client. Fight its abstractions. Hope it works in your runtime.
The pg-protocol way: Raw protocol access. Build exactly what you need. Works everywhere JavaScript runs.
- Full protocol support - Startup, query, extended query, COPY, auth
- Zero dependencies - Just TypeScript
- Streaming parser - Process messages as they arrive
- Platform agnostic - ArrayBuffer-based, works anywhere
- Type-safe - Full TypeScript definitions
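For context, every frontend message after startup shares one framing: a one-byte type tag, a big-endian Int32 length that includes itself but not the tag, then the payload. A sketch of how a simple query could be framed (illustrative only; `frameQuery` is not a library export, and `serialize.query` handles this for you):

```typescript
// Frame a simple Query ('Q') message: tag + Int32 length + sql + NUL.
function frameQuery(sql: string): Uint8Array {
  const body = new TextEncoder().encode(sql)
  const buf = new Uint8Array(1 + 4 + body.length + 1)
  const view = new DataView(buf.buffer)
  buf[0] = 0x51 // 'Q'
  view.setInt32(1, 4 + body.length + 1) // length includes itself, excludes the tag
  buf.set(body, 5)
  // trailing NUL terminator is already zero-initialized
  return buf
}
```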
```bash
npm install @dotdo/pg-protocol
```
```typescript
import { serialize, Parser } from '@dotdo/pg-protocol'

// Create TCP connection (platform-specific)
const socket = await connect('localhost', 5432)

// Send startup message
await socket.write(serialize.startup({
  user: 'postgres',
  database: 'mydb'
}))

// Handle authentication
const parser = new Parser()
socket.on('data', (buffer) => {
  parser.parse(buffer, (msg) => {
    switch (msg.name) {
      case 'authenticationOk':
        // Authenticated - send query
        socket.write(serialize.query('SELECT 1'))
        break
      case 'dataRow':
        console.log('Row:', msg.fields)
        break
      case 'readyForQuery':
        console.log('Ready for next query')
        break
      case 'error':
        console.error('Error:', msg.message)
        break
    }
  })
})
```
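Note that dataRow fields arrive positionally; the column names come from the preceding rowDescription message. A small pairing helper sketch (`rowToObject` is illustrative, not part of the library):

```typescript
// Pair rowDescription column names with dataRow fields (text protocol).
// Fields may be null for SQL NULL values.
function rowToObject(
  columns: string[],
  fields: Array<string | null>
): Record<string, string | null> {
  const row: Record<string, string | null> = {}
  columns.forEach((name, i) => {
    row[name] = fields[i] ?? null
  })
  return row
}
```

Track the most recent rowDescription per query and apply it to each subsequent dataRow.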
Prepared statements with parameters:
```typescript
// Parse (prepare) the statement
await socket.write(serialize.parse({
  name: 'get_user',
  text: 'SELECT * FROM users WHERE id = $1',
  types: [] // Let PostgreSQL infer types
}))

// Bind parameters
await socket.write(serialize.bind({
  portal: '',
  statement: 'get_user',
  values: ['user-123']
}))

// Execute
await socket.write(serialize.execute({ rows: 0 }))

// Sync to get results
await socket.write(serialize.sync())
```
Bulk data transfer:
```typescript
// Start COPY
await socket.write(serialize.query('COPY users FROM STDIN'))

// Wait for copyInResponse
parser.parse(buffer, (msg) => {
  if (msg.name === 'copyInResponse') {
    // Send data chunks. Slice so only this chunk's bytes are sent:
    // a Buffer's .buffer may be a larger shared pool.
    const chunk = Buffer.from('1\tAlice\n2\tBob\n')
    socket.write(serialize.copyData(
      chunk.buffer.slice(chunk.byteOffset, chunk.byteOffset + chunk.byteLength)
    ))

    // Signal completion
    socket.write(serialize.copyDone())
  }
})
```
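In text-format COPY, tabs, newlines, and backslashes inside values must be escaped, and NULL is written as `\N`. A minimal encoder sketch (`encodeCopyRow` is illustrative, not a library export):

```typescript
// Encode one row for COPY ... FROM STDIN in text format.
// Escapes per PostgreSQL's text COPY rules; NULL becomes \N.
function encodeCopyRow(values: Array<string | number | null>): string {
  return values
    .map((v) => {
      if (v === null) return '\\N'
      return String(v)
        .replace(/\\/g, '\\\\') // escape backslashes first
        .replace(/\t/g, '\\t')
        .replace(/\n/g, '\\n')
        .replace(/\r/g, '\\r')
    })
    .join('\t') + '\n'
}
```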
| Method | Description |
|--------|-------------|
| startup(opts) | Connection startup with parameters |
| password(pwd) | Password authentication |
| query(sql) | Simple query |
| parse(opts) | Prepare statement |
| bind(opts) | Bind parameters |
| execute(opts) | Execute portal |
| describe(opts) | Describe statement/portal |
| close(opts) | Close statement/portal |
| sync() | Sync after extended query |
| flush() | Flush output |
| end() | Terminate connection |
| copyData(data) | COPY data chunk |
| copyDone() | COPY complete |
| copyFail(msg) | COPY failed |
| cancel(pid, key) | Cancel running query |
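cancel(pid, key) takes the process ID and secret key from backendKeyData, and the resulting CancelRequest must be sent on a new connection, not the one running the query. Its wire layout is fixed by the protocol; a sketch of the framing (illustrative, not the library's internals):

```typescript
// CancelRequest: Int32 length (16), Int32 code 80877102 (1234 << 16 | 5678),
// then the backend process ID and secret key from backendKeyData.
function frameCancelRequest(pid: number, key: number): Uint8Array {
  const buf = new Uint8Array(16)
  const view = new DataView(buf.buffer)
  view.setInt32(0, 16)        // total message length, including itself
  view.setInt32(4, 80877102)  // cancel request code
  view.setInt32(8, pid)
  view.setInt32(12, key)
  return buf
}
```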
| Message | Description |
|---------|-------------|
| authenticationOk | Authentication successful |
| authenticationCleartextPassword | Cleartext password requested |
| authenticationMD5Password | MD5 password requested |
| authenticationSASL | SASL auth requested |
| parameterStatus | Server parameter |
| backendKeyData | Process ID and secret key |
| readyForQuery | Ready for queries |
| rowDescription | Column metadata |
| dataRow | Row data |
| commandComplete | Command finished |
| error | Error response |
| notice | Notice/warning |
| copyInResponse | Ready for COPY data |
| copyOutResponse | Sending COPY data |
| copyData | COPY data chunk |
| copyDone | COPY complete |
```typescript
parser.parse(buffer, (msg) => {
  if (msg.name === 'authenticationCleartextPassword') {
    socket.write(serialize.password('mysecretpassword'))
  }
})
```
```typescript
import { createHash } from 'crypto'

const md5 = (data: string | Buffer) =>
  createHash('md5').update(data).digest('hex')

parser.parse(buffer, (msg) => {
  if (msg.name === 'authenticationMD5Password') {
    const salt = msg.salt // 4 raw bytes
    // PostgreSQL MD5 auth: md5(md5(password + user) + salt)
    const inner = md5(password + user)
    const hash = md5(Buffer.concat([Buffer.from(inner), salt]))
    socket.write(serialize.password('md5' + hash))
  }
})
```
```typescript
parser.parse(buffer, (msg) => {
  if (msg.name === 'authenticationSASL') {
    // msg.mechanisms = ['SCRAM-SHA-256']
    const clientFirst = buildClientFirst(nonce)
    socket.write(serialize.sendSASLInitialResponseMessage(
      'SCRAM-SHA-256',
      clientFirst
    ))
  }
  if (msg.name === 'authenticationSASLContinue') {
    const serverFirst = msg.data
    const clientFinal = buildClientFinal(serverFirst, password)
    socket.write(serialize.sendSCRAMClientFinalMessage(clientFinal))
  }
  if (msg.name === 'authenticationSASLFinal') {
    // Verify server signature
    const serverFinal = msg.data
    verifyServerSignature(serverFinal)
  }
})
```
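The buildClientFirst helper above is left to the application. Per RFC 5802, the client-first-message is the gs2 header `n,,` followed by `n=<user>,r=<nonce>`; PostgreSQL ignores the SASL username (it uses the one from startup), so `n=*` is conventional. A minimal sketch:

```typescript
import { randomBytes } from 'crypto'

// SCRAM client-first-message (RFC 5802). PostgreSQL takes the username
// from the startup packet, so "n=*" is the conventional placeholder.
function buildClientFirst(nonce: string): string {
  return `n,,n=*,r=${nonce}`
}

// Generate a fresh random nonce per connection attempt
const nonce = randomBytes(18).toString('base64')
```

buildClientFinal and verifyServerSignature require the full SCRAM key derivation (PBKDF2, HMAC-SHA-256) and are beyond this sketch.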
Process large result sets efficiently:
```typescript
const parser = new Parser()

// Parser handles partial messages automatically
socket.on('data', (chunk) => {
  // Chunk may contain partial messages;
  // the parser buffers and emits complete messages
  parser.parse(chunk, (msg) => {
    if (msg.name === 'dataRow') {
      processRow(msg.fields) // Process immediately, don't buffer
    }
  })
})
```
```typescript
import { serialize, Parser } from '@dotdo/pg-protocol'

export default {
  async fetch(request, env) {
    const socket = await env.POSTGRES.connect()
    await socket.write(serialize.startup({
      user: env.PG_USER,
      database: env.PG_DATABASE
    }))
    // ... handle auth and queries
  }
}
```
```typescript
import { serialize, Parser } from '@dotdo/pg-protocol'
import { Socket } from 'net'

const socket = new Socket()
socket.connect(5432, 'localhost')
socket.on('connect', () => {
  socket.write(serialize.startup({ user: 'postgres', database: 'mydb' }))
})
```
```typescript
import { serialize, Parser } from '@dotdo/pg-protocol'

const ws = new WebSocket('wss://proxy.example.com/pg')
ws.binaryType = 'arraybuffer' // the parser expects ArrayBuffer, not the default Blob
const parser = new Parser()

ws.onopen = () => {
  ws.send(serialize.startup({ user: 'postgres', database: 'mydb' }))
}

ws.onmessage = (event) => {
  parser.parse(event.data, (msg) => {
    // Handle messages
  })
}
```
pg-protocol powers the postgres.do internals:
```typescript
// This is how postgres.do Durable Objects communicate
import { serialize, Parser } from '@dotdo/pg-protocol'

// Internal DO-to-DO protocol communication
const response = await doStub.fetch('https://internal/query', {
  method: 'POST',
  body: serialize.query('SELECT * FROM users')
})

const parser = new Parser()
parser.parse(await response.arrayBuffer(), handleMessage)
```
| Feature | Benefit |
|---------|---------|
| Zero-copy parsing | Minimal memory allocations |
| Streaming | Process as data arrives |
| ArrayBuffer | No Buffer/Uint8Array conversion |
| No dependencies | Small bundle size |
- PostgreSQL Protocol Documentation
- GitHub
- Original: brianc/node-postgres pg-protocol
- @dotdo/postgres - Full PostgreSQL client
- @dotdo/pglite - PostgreSQL in WASM
- @dotdo/pg-lake - Distributed PostgreSQL
Adapted from Brian C's pg-protocol with modifications for universal JavaScript runtime support.
Apache-2.0