A secure backup and restore utility that creates encrypted, compressed archives of files and directories. Supports multiple encryption and compression algorithms, with the ability to write backups to multiple destinations including local storage and S3.
```bash
npm install bunchive
```
- Encryption: AES-128-CTR, AES-192-CTR, or AES-256-CTR (default: AES-256-CTR)
- Compression: zstd, gzip, brotli, or deflate (default: zstd)
- Checksum verification: HMAC-SHA256 checksums stored alongside backups for integrity verification (uses encryption key to prevent manipulation)
- Multiple destinations: Write backups to multiple locations simultaneously
- S3 support: Direct backup to S3 buckets
- Glob patterns: Flexible file matching using glob patterns
- Scheduled backups: Run backups automatically on a schedule using cron patterns
- Sliding backup window: Automatically keep only the specified number of backups per destination
- Timestamp formats: Choose between ISO, Unix timestamp, or no timestamp in backup filenames
You can run the tool directly with bunx (no installation required):

```bash
bunx bunchive backup -k
```

Or install it with bun:

```bash
bun install bunchive
```
After installation, you can use the API programmatically:

```ts
import { backup, generateKey } from "bunchive";

const key = await generateKey();
const checksum = await backup({
  patterns: ["src/**/*.ts"],
  outputPaths: ["./backup"],
  key: key,
});
```
You can also use the CLI:

```bash
bun run bunchive backup -k
```
You can also run the tool using Docker:

```bash
docker run --rm -v $(pwd):/data -w /data -e TZ=
```
Generate a new encryption key (32 bytes, hex-encoded):

```bash
bun run bu key
```

Save this key securely - you'll need it to restore backups.
Back up files matching glob patterns to one or more destinations:

```bash
# Basic backup with the key provided via the command line
bun run bu backup -k
```
Options:

- -k, --key: Encryption key (hex-encoded). Can also use the BACKUP_KEY environment variable.
- -d, --destinations: Target location(s) for the backup (can be specified multiple times). Can also use the BACKUP_DESTINATIONS environment variable (semicolon-separated). Defaults to ./backup.
- -e, --encryption: Encryption algorithm (aes-128-ctr, aes-192-ctr, aes-256-ctr). Defaults to aes-256-ctr.
- -c, --compression: Compression algorithm (zstd, gzip, brotli, deflate). Defaults to zstd.
- -t, --timestamp: Timestamp format for backup filenames (iso, unix, none). Defaults to iso. Can also use the BACKUP_FORMAT environment variable.
- -n, --count: Number of backups to keep per destination (sliding window). Requires timestamps to be enabled (cannot be used with -t none). Can also use the BACKUP_COUNT environment variable.
- -s, --schedule: Cron pattern for scheduled backups. When provided, the script runs continuously and executes backups on schedule. Can also use the BACKUP_SCHEDULE environment variable.
- --no-checksum: Disable checksum generation. Can also use the BACKUP_CHECKSUM=false environment variable.

Patterns:

- Provide glob patterns as positional arguments
- Can also use the BACKUP_PATTERNS environment variable (semicolon-separated)
Restore files from a backup archive:

```bash
# Basic restore
bun run bu restore -k backup/backup_2026-1-3T11-46-21.tar.zstd.crypt

# Restore to a custom output directory
bun run bu restore -k -o ./restored backup/backup_2026-1-3T11-46-21.tar.zstd.crypt

# Restore from S3
bun run bu restore -k s3://my-bucket/backups/backup_2026-1-3T11-46-21.tar.zstd.crypt

# Restore a backup with the Unix timestamp format
bun run bu restore -k backup/backup_1704304150.tar.zstd.crypt

# Restore a backup without a timestamp
bun run bu restore -k backup/backup.tar.zstd.crypt

# Specify encryption/compression if different from the defaults
bun run bu restore -k -e aes-128-ctr -c gzip backup/backup_2026-1-3T11-46-21.tar.gzip.crypt

# Skip checksum verification
bun run bu restore -k --no-verify-checksum backup/backup_2026-1-3T11-46-21.tar.zstd.crypt
```
Options:

- -k, --key: Encryption key (hex-encoded). Can also use the BACKUP_KEY environment variable.
- -o, --output: Output directory for restored files. Defaults to ./restored.
- -e, --encryption: Encryption algorithm used in the backup. Defaults to aes-256-ctr.
- -c, --compression: Compression algorithm used in the backup. Defaults to zstd.
- --verify-checksum: Verify the checksum if a .sha256 file exists. Defaults to true. Set to false to skip verification.
You can schedule backups to run automatically using cron patterns. When a schedule is provided, the script runs continuously and executes backups according to the cron pattern.
Cron Pattern Format:

```
┌──────────────── second (0 - 59) (optional)
│ ┌────────────── minute (0 - 59)
│ │ ┌──────────── hour (0 - 23)
│ │ │ ┌────────── day of month (1 - 31)
│ │ │ │ ┌──────── month (1 - 12)
│ │ │ │ │ ┌────── day of week (0 - 7) (Sunday is 0 or 7)
│ │ │ │ │ │
* * * * * *
```

Examples:
```bash
# Run a backup every day at 2:00 AM
bun run bu backup -k -d ./backup -s "0 2 * * *" "src/**/*.ts"

# Run a backup every hour
bun run bu backup -k -d ./backup -s "0 * * * *" "src/**/*.ts"

# Run a backup every Monday at 3:00 AM
bun run bu backup -k -d ./backup -s "0 3 * * 1" "src/**/*.ts"

# Run a backup every 30 minutes
bun run bu backup -k -d ./backup -s "*/30 * * * *" "src/**/*.ts"
```

When a schedule is active, the script runs continuously. Press Ctrl+C to stop the scheduled backups.
The sliding backup window feature automatically keeps only the specified number of backups per destination, deleting older backups. This helps manage disk space while maintaining a history of recent backups.
Important: The sliding backup window requires timestamps to be enabled (it cannot be used with -t none). Cleanup happens automatically after each backup is created.

Examples:
```bash
# Keep only the last 5 backups
bun run bu backup -k -d ./backup -n 5 "src/**/*.ts"

# Keep the last 10 backups with the Unix timestamp format
bun run bu backup -k -d ./backup -t unix -n 10 "src/**/*.ts"

# The sliding window works per destination:
# each destination (backup1 and backup2) keeps its own 5 backups
bun run bu backup -k -d ./backup1 -d ./backup2 -n 5 "src/**/*.ts"

# Sliding window with S3 destinations
bun run bu backup -k -d s3://my-bucket/backups -n 7 "src/**/*.ts"
```
The cleanup process:

1. Lists all backup files matching the pattern backup_*.tar.[compression-alg].crypt in each destination
2. Sorts them by filename (newest first, based on the timestamp in the filename)
3. Keeps the first N files (where N is the specified count)
4. Deletes the remaining older backups and their checksum files

Note: The sliding window feature works independently for each destination, so if you back up to multiple locations, each will maintain its own set of backups.
### Manual Recovery

If you need to recover a backup without the tool (e.g., the tool is unavailable), you can manually extract backups using standard command-line tools.
The backup file has the following structure:
- First 16 bytes: Initialization Vector (IV/nonce) for AES-CTR encryption
- Remaining bytes: Encrypted, compressed tar archive
Additionally, a checksum file (.sha256) is created alongside each backup file (unless --no-checksum is used or BACKUP_CHECKSUM=false is set). The checksum is computed as HMAC-SHA256(key, backup_file_content): the encryption key is used to generate a keyed hash of the backup file content (including the IV and the encrypted data). This HMAC-based approach prevents manipulation without knowledge of the encryption key, and lets you verify the integrity of the backup file without decrypting it.
#### 1. Extract IV
```bash
BACKUP_FILE="backup/backup_2026-1-3T11-46-21.tar.zstd.crypt"
KEY=""

# Extract the IV (first 16 bytes)
dd if="$BACKUP_FILE" of=iv.bin bs=1 count=16

# Extract the encrypted data (everything after the first 16 bytes)
FILE_SIZE=$(stat -f%z "$BACKUP_FILE")   # on Linux, use: stat -c%s
ENCRYPTED_SIZE=$((FILE_SIZE - 16))
dd if="$BACKUP_FILE" of=encrypted.bin bs=1 skip=16 count=$ENCRYPTED_SIZE
```

#### 2. Decrypt the Data
Decrypt using OpenSSL:

```bash
# Convert the hex key to binary
echo -n "$KEY" | xxd -r -p > key.bin

# Decrypt (AES-256-CTR)
openssl enc -d -aes-256-ctr \
  -iv $(xxd -p -c 256 iv.bin | tr -d '\n') \
  -K $(xxd -p -c 256 key.bin | tr -d '\n') \
  -in encrypted.bin \
  -out compressed.tar
```

Note: For AES-128-CTR or AES-192-CTR, replace aes-256-ctr with aes-128-ctr or aes-192-ctr respectively, and adjust the key size (16 bytes for AES-128, 24 bytes for AES-192).

#### 3. Decompress the Archive
Decompress based on the compression algorithm used:
For zstd (default):

```bash
zstd -d compressed.tar -o archive.tar
```

For gzip:

```bash
mv compressed.tar archive.tar.gz
gunzip archive.tar.gz
```

For brotli:

```bash
mv compressed.tar compressed.br
brotli -d compressed.br -o archive.tar
```

For deflate:

```bash
# Deflate can be decompressed using zlib-flate (part of the qpdf package) or other tools.
# On macOS with Homebrew: brew install qpdf
zlib-flate -uncompress < compressed.tar > archive.tar

# Alternative: use openssl zlib (if available)
openssl zlib -d -in compressed.tar -out archive.tar
```

#### 4. Extract the Tar Archive
```bash
mkdir -p restored
tar -xf archive.tar -C restored
```

#### 5. Verify Checksum (Optional)
If a .sha256 checksum file exists alongside the backup, you can verify the integrity of the backup file. The checksum uses HMAC-SHA256 with the encryption key, so you need the key to verify it:

```bash
# Compute the HMAC-SHA256 of the backup file using the key (the key is in hex format)
openssl dgst -sha256 -mac HMAC -macopt "hexkey:$KEY" "$BACKUP_FILE" | cut -d' ' -f2

# Compare with the checksum file
cat "${BACKUP_FILE}.sha256" | tr -d '\n'
```

The hashes should match. If they don't, the backup file may be corrupted or the key may be incorrect.
A complete bash script for manual recovery (assuming zstd compression and AES-256-CTR) is provided in manual-recovery.sh.
### S3 Configuration
When using S3 destinations, configure credentials via environment variables:
```bash
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_REGION="us-east-1" # Optional, for AWS S3
export AWS_ENDPOINT="https://s3.us-east-1.amazonaws.com" # Optional, for S3-compatible services
```

Note: For S3-compatible services (such as Cloudflare R2, DigitalOcean Spaces, or MinIO), you may need to set AWS_ENDPOINT to the service's endpoint URL. Bun's S3 API works with any S3-compatible storage service.

### Project Structure
- src/cli.ts - Command-line interface
- src/backup.ts - Backup functionality
- src/restore.ts - Restore functionality
- src/common.ts - Shared constants and types
- src/generateKey.ts - Key generation utility
- src/cleanup.ts - Sliding backup window cleanup functionality

### Development
```bash
# Run tests
bun test

# Build a standalone executable
bun run build

# Format code
bun run format
```