
Universal cloud storage manager supporting Amazon S3, Cloudflare R2, and Backblaze B2 with automatic provider detection. Built on the official AWS SDK v3 with optimized configuration for multi-cloud compatibility.
- Multi-Cloud Support: Works seamlessly with Amazon S3, Cloudflare R2, and Backblaze B2
- Auto-Detection: Automatically detects the configured provider from environment variables
- Modern SDK: Built on AWS SDK v3 with command pattern for optimal performance
- Simple API: Single function interface for all storage operations
- No File System: Returns data directly - perfect for serverless/edge environments
- Minified: Terser minification for smaller bundle sizes
```bash
npm install manage-storage
```

```bash
bun i manage-storage
```
```javascript
import { manageStorage } from "manage-storage";
// Upload a file
await manageStorage("upload", {
key: "documents/report.pdf",
body: fileContent,
});
// Download a file
const data = await manageStorage("download", {
key: "documents/report.pdf",
});
// List all files
const files = await manageStorage("list");
// Copy a file
await manageStorage("copy", {
key: "documents/report.pdf",
destinationKey: "documents/report-backup.pdf",
});
// Rename a file (copy + delete)
await manageStorage("rename", {
key: "documents/old-name.pdf",
destinationKey: "documents/new-name.pdf",
});
// Delete a file
await manageStorage("delete", {
key: "documents/report.pdf",
});
```
Set environment variables for your preferred provider. The library will automatically detect which provider to use.
```env
CLOUDFLARE_BUCKET_NAME=my-bucket
CLOUDFLARE_ACCESS_KEY_ID=your-access-key-id
CLOUDFLARE_SECRET_ACCESS_KEY=your-secret-access-key
CLOUDFLARE_BUCKET_URL=https://your-account-id.r2.cloudflarestorage.com
```

```env
BACKBLAZE_BUCKET_NAME=my-bucket
BACKBLAZE_ACCESS_KEY_ID=your-key-id
BACKBLAZE_SECRET_ACCESS_KEY=your-application-key
BACKBLAZE_BUCKET_URL=https://s3.us-west-004.backblazeb2.com
```

```env
AMAZON_BUCKET_NAME=my-bucket
AMAZON_ACCESS_KEY_ID=your-access-key-id
AMAZON_SECRET_ACCESS_KEY=your-secret-access-key
AMAZON_BUCKET_URL=https://s3.amazonaws.com
AMAZON_REGION=us-east-1
```
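Conceptually, auto-detection just checks which provider's variables are present. The sketch below is an illustrative assumption about how this could work, not the library's actual implementation (the `detectProvider` helper and the checking order are ours):

```javascript
// Illustrative sketch of env-based provider detection.
// The helper name and the cloudflare -> backblaze -> amazon order are assumptions.
function detectProvider(env = process.env) {
  const prefixes = { cloudflare: "CLOUDFLARE", backblaze: "BACKBLAZE", amazon: "AMAZON" };
  for (const [provider, prefix] of Object.entries(prefixes)) {
    // A provider counts as configured when its core variables are all set.
    const configured = ["BUCKET_NAME", "ACCESS_KEY_ID", "SECRET_ACCESS_KEY"].every(
      (suffix) => env[`${prefix}_${suffix}`]
    );
    if (configured) return provider;
  }
  throw new Error("No storage provider configured");
}
```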
### manageStorage(action, options)

Performs storage operations on your configured cloud provider.
#### Parameters
- `action` (string) - The operation to perform: `'upload'`, `'download'`, `'delete'`, `'list'`, `'deleteAll'`, `'copy'`, or `'rename'`
- `options` (object) - Operation-specific options
#### Options
| Option         | Type                                | Required                        | Description                                          |
| -------------- | ----------------------------------- | ------------------------------- | ---------------------------------------------------- |
| key            | string                              | Yes (except for list/deleteAll) | The object key/path                                  |
| destinationKey | string                              | Yes (for copy/rename)           | The destination key/path for copy/rename operations  |
| body           | string\|Buffer\|Stream              | Yes (for upload)                | The file content to upload                           |
| provider       | 'amazon'\|'cloudflare'\|'backblaze' | No                              | Force a specific provider (auto-detected if omitted) |
```javascript
// Upload text content
await manageStorage("upload", {
key: "notes/memo.txt",
body: "Hello, World!",
});
// Upload Buffer
const buffer = Buffer.from("File contents");
await manageStorage("upload", {
key: "data/file.bin",
body: buffer,
});
// Upload JSON
await manageStorage("upload", {
key: "config/settings.json",
body: JSON.stringify({ theme: "dark", lang: "en" }),
});
```
```javascript
// Download and get the raw data
const data = await manageStorage("download", {
key: "notes/memo.txt",
});
console.log(data); // "Hello, World!"
// Download JSON and parse
const configData = await manageStorage("download", {
key: "config/settings.json",
});
const config = JSON.parse(configData);
console.log(config.theme); // "dark"
```
```javascript
// List all files in the bucket
const files = await manageStorage("list");
console.log(files);
// Output: ['notes/memo.txt', 'data/file.bin', 'config/settings.json']
// Filter by prefix (folder)
const notes = files.filter((key) => key.startsWith("notes/"));
console.log(notes); // ['notes/memo.txt']
```
```javascript
// Copy a file to a new location
await manageStorage("copy", {
key: "documents/report.pdf",
destinationKey: "documents/backup/report-2024.pdf",
});
// Create a backup
await manageStorage("copy", {
key: "config/settings.json",
destinationKey: "config/settings.backup.json",
});
```
```javascript
// Rename a file (performs copy + delete)
await manageStorage("rename", {
key: "old-filename.txt",
destinationKey: "new-filename.txt",
});
// Move to a different folder
await manageStorage("rename", {
key: "temp/draft.md",
destinationKey: "published/article.md",
});
```
```javascript
// Delete a single file
await manageStorage("delete", {
key: "notes/memo.txt",
});
// Delete all files in the bucket (use with caution!)
const result = await manageStorage("deleteAll");
console.log(`Deleted ${result.count} files`);
```
```javascript
// Use Cloudflare R2 even if other providers are configured
await manageStorage("upload", {
key: "test.txt",
body: "Hello Cloudflare!",
provider: "cloudflare",
});
// Use Backblaze B2 specifically
await manageStorage("upload", {
key: "test.txt",
body: "Hello Backblaze!",
provider: "backblaze",
});
```
```javascript
// Pass credentials at runtime instead of using env vars
await manageStorage("upload", {
key: "secure/data.json",
body: JSON.stringify({ secret: "value" }),
provider: "cloudflare",
BUCKET_NAME: "my-custom-bucket",
ACCESS_KEY_ID: "runtime-key-id",
SECRET_ACCESS_KEY: "runtime-secret",
BUCKET_URL: "https://custom-account.r2.cloudflarestorage.com",
});
```

```javascript
// app/api/upload/route.js
import { manageStorage } from "manage-storage";
export async function POST(req) {
const { fileName, fileContent } = await req.json();
const result = await manageStorage("upload", {
key: `uploads/${fileName}`,
body: fileContent,
});
return Response.json(result);
}
export async function GET(req) {
const { searchParams } = new URL(req.url);
const fileName = searchParams.get("file");
const data = await manageStorage("download", {
key: `uploads/${fileName}`,
});
return new Response(data, {
headers: {
"Content-Type": "application/octet-stream",
"Content-Disposition": `attachment; filename="${fileName}"`,
},
});
}
```

```javascript
import express from "express";
import { manageStorage } from "manage-storage";
const app = express();
app.use(express.json());
app.post("/api/files", async (req, res) => {
try {
const { key, content } = req.body;
const result = await manageStorage("upload", { key, body: content });
res.json(result);
} catch (error) {
res.status(500).json({ error: error.message });
}
});
app.get("/api/files", async (req, res) => {
try {
const files = await manageStorage("list");
res.json({ files });
} catch (error) {
res.status(500).json({ error: error.message });
}
});
app.get("/api/files/:key", async (req, res) => {
try {
const data = await manageStorage("download", { key: req.params.key });
res.send(data);
} catch (error) {
res.status(500).json({ error: error.message });
}
});
app.delete("/api/files/:key", async (req, res) => {
try {
const result = await manageStorage("delete", { key: req.params.key });
res.json(result);
} catch (error) {
res.status(500).json({ error: error.message });
}
});
app.listen(3000, () => console.log("Server running on port 3000"));
```
```javascript
import { manageStorage } from "manage-storage";
export default {
async fetch(request, env) {
const url = new URL(request.url);
if (request.method === "POST" && url.pathname === "/upload") {
const { key, content } = await request.json();
const result = await manageStorage("upload", {
key,
body: content,
provider: "cloudflare",
BUCKET_NAME: env.CLOUDFLARE_BUCKET_NAME,
ACCESS_KEY_ID: env.CLOUDFLARE_ACCESS_KEY_ID,
SECRET_ACCESS_KEY: env.CLOUDFLARE_SECRET_ACCESS_KEY,
BUCKET_URL: env.CLOUDFLARE_BUCKET_URL,
});
return Response.json(result);
}
return new Response("Not found", { status: 404 });
},
};
```
```javascript
// Upload multiple files
const files = [
{ key: "docs/file1.txt", content: "Content 1" },
{ key: "docs/file2.txt", content: "Content 2" },
{ key: "docs/file3.txt", content: "Content 3" },
];
await Promise.all(
files.map((file) =>
manageStorage("upload", { key: file.key, body: file.content })
)
);
// Download multiple files
const keys = ["docs/file1.txt", "docs/file2.txt", "docs/file3.txt"];
const contents = await Promise.all(
keys.map((key) => manageStorage("download", { key }))
);
```
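`Promise.all` fires every request at once, which can overwhelm rate limits on large file sets. A hedged sketch of capped concurrency (the `chunk` and `uploadBatched` helpers below are our own, not part of manage-storage):

```javascript
// Split an array into fixed-size batches.
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

// Run `opFn` over all files, at most `size` requests in flight at once.
// Each batch runs in parallel; batches run sequentially.
async function uploadBatched(files, opFn, size = 5) {
  const results = [];
  for (const batch of chunk(files, size)) {
    results.push(...(await Promise.all(batch.map(opFn))));
  }
  return results;
}

// Usage with manage-storage:
// await uploadBatched(files, (f) => manageStorage("upload", { key: f.key, body: f.content }));
```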
Upload returns:

```javascript
{
success: true,
key: 'path/to/file.txt',
// ... additional provider-specific metadata
}
```

Download returns:

```javascript
// Returns the file content as a string
"File contents here...";
```

Delete returns:

```javascript
{
success: true,
key: 'path/to/file.txt'
}
```

List returns:

```javascript
["folder/file1.txt", "folder/file2.txt", "another/file3.json"];
```

deleteAll returns:

```javascript
{
success: true,
count: 42
}
```

Copy returns:

```javascript
{
success: true,
sourceKey: 'documents/report.pdf',
destinationKey: 'documents/backup/report-2024.pdf'
}
```

Rename returns:

```javascript
{
success: true,
oldKey: 'old-filename.txt',
newKey: 'new-filename.txt'
}
```
This library uses the official `@aws-sdk/client-s3` because:
- Modern Architecture: Modular SDK with tree-shakable imports
- Command Pattern: Clean, consistent API design
- S3-Compatible: Works with S3, R2, B2, and any S3-compatible service
- Official Support: Direct support from AWS with regular updates
- Production Ready: Battle-tested in enterprise environments
- Minified Output: Terser minification reduces bundle size
| Service | Storage price (/TB-month) | Egress to internet | API ops (Class A/B, approx) | Minimum duration | Notes |
| --------------------------------------------------------------- | ------------------------------ | ----------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------- | ----------------------------------------------------------- | ------------------------------------------------------------------ |
| Backblaze B2 | $6 | Free up to 3x stored/mo, then $0.01/GB | Free quotas; then ~$0.004/10K B | None | Lowest storage; generous egress. |
| Cloudflare R2 | $15 | Zero | ~$4.50/M A; $0.36/M B | None | No bandwidth bills. |
| AWS S3 Standard | $23 | Tiered ~$0.09/GB first 10TB | ~$5/M A; $0.4/M B | None | Ecosystem premium. |
| Google GCS Standard | $20-26 (region/dual/multi) | Tiered ~$0.08-0.12/GB worldwide | ~$5/M A; $0.4/M B (Standard) | None (Standard) | Multi-region ~$26; cheaper classes available (Nearline $10, etc.). |

| Aspect | Backblaze B2 | Cloudflare R2 | AWS S3 | Google GCS |
| -------------------- | --------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- | --------------------------------------------------------------------- | --------------------------------------------------------------------------------- |
| Ecosystem | Standalone; partners (Fastly, Vultr) | Cloudflare Workers/CDN/Zero Trust | Full AWS (Lambda, EC2, Athena) | Full GCP (GKE, BigQuery, AI/ML) |
| Storage classes | Single hot | Single | Many (IA, Glacier, Intelligent) | Standard, Nearline, Coldline, Archive |
| S3 compatibility | Strong | Excellent (99% ops) | Native | Strong |
| Lifecycle mgmt | Basic rules | Basic expiration | Advanced | Advanced, Autoclass |
| Object Lock | Yes (compliance/gov) | Limited | Yes | Yes (via retention) |
| Free tier | First 10GB | 10GB storage, 1M Class A/mo | Limited | 5GB-months Standard |
- Backblaze B2: Cheapest for bulk/hot storage with moderate egress (backups, media archives); simple, no vendor lock-in.
- Cloudflare R2: Public-facing assets/images/APIs with high traffic; zero egress saves big on web delivery.
- AWS S3: AWS-centric apps needing advanced analytics, replication, compliance; pay for features/ecosystem.
- Google GCS: GCP workloads (BigQuery, AI, Kubernetes); multi-region needs or tiered classes for cost optimization.
Backblaze wins on raw storage cost, R2 on bandwidth-heavy apps, while AWS/GCS suit enterprise ecosystems with richer tools. For exact costs, use calculators with your workload (e.g., TB stored, TB egress, ops volume).
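For a first-order comparison, the table's list prices can be plugged into a rough estimator. This sketch deliberately simplifies: flat egress rates, with free tiers, per-operation charges, and tiered pricing ignored; the `PRICES` object just restates the table above:

```javascript
// Rough monthly cost estimate (USD) from the table's list prices.
// Simplified: flat egress rates; free tiers, op charges, and tiering ignored.
const PRICES = {
  backblaze: { storagePerTB: 6, egressPerGB: 0.01 }, // egress actually free up to 3x stored
  cloudflare: { storagePerTB: 15, egressPerGB: 0 },
  aws: { storagePerTB: 23, egressPerGB: 0.09 },
};

function estimateMonthlyCost(provider, storedTB, egressTB) {
  const p = PRICES[provider];
  return p.storagePerTB * storedTB + p.egressPerGB * egressTB * 1000;
}

// 10 TB stored, 5 TB egress/month:
// aws: 23 * 10 + 0.09 * 5000 = 680
// cloudflare: 15 * 10 + 0 = 150
```

Even this crude arithmetic shows why zero-egress R2 dominates bandwidth-heavy workloads while B2 wins on raw storage.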