# Express Storage

One API for all cloud storage: AWS S3, Google Cloud Storage, Azure Blob Storage, and local disk. Secure Express.js file uploads with presigned URLs, validation, and zero-config provider switching. Built-in path traversal prevention, file validation, and automatic filename sanitization.

Install: `npm install express-storage`

Secure, unified file uploads for Express.js — one API for all cloud providers.
Stop writing separate upload code for every storage provider. Express Storage gives you a single, secure interface that works with AWS S3, Google Cloud Storage, Azure Blob Storage, and local disk. Switch providers by changing one environment variable. No code changes required.
---
Every application needs file uploads. And every application gets it wrong at first.
You start with local storage, then realize you need S3 for production. You copy-paste upload code from Stack Overflow, then discover it's vulnerable to path traversal attacks. You build presigned URL support, then learn Azure handles it completely differently than AWS.
Express Storage solves these problems once, so you don't have to.
- One API, Four Providers — Write upload code once. Deploy to any cloud.
- Security Built In — Path traversal prevention, filename sanitization, file validation, and null byte protection come standard.
- Presigned URLs Done Right — Client-side uploads that bypass your server, with proper validation for each provider's quirks.
- TypeScript Native — Full type safety with intelligent autocomplete. No `any` types hiding bugs.
- Zero Config Switching — Change `FILE_DRIVER=local` to `FILE_DRIVER=s3` and you're done.
---
## Quick Start

```bash
npm install express-storage
```

```typescript
import express from "express";
import multer from "multer";
import { StorageManager } from "express-storage";
const app = express();
const upload = multer();
const storage = new StorageManager();
app.post("/upload", upload.single("file"), async (req, res) => {
  const result = await storage.uploadFile(req.file, {
    maxSize: 10 * 1024 * 1024, // 10MB limit
    allowedMimeTypes: ["image/jpeg", "image/png", "application/pdf"],
  });

  if (result.success) {
    res.json({ url: result.fileUrl });
  } else {
    res.status(400).json({ error: result.error });
  }
});
```
Create a `.env` file:

```env
# Choose your storage provider
FILE_DRIVER=local
```
That's it. Your upload code stays the same regardless of which provider you choose.
---
## Supported Storage Providers

| Provider     | Direct Upload | Presigned URLs   | Best For                  |
| ------------ | ------------- | ---------------- | ------------------------- |
| Local Disk   | `local`       | —                | Development, small apps   |
| AWS S3       | `s3`          | `s3-presigned`   | Most production apps      |
| Google Cloud | `gcs`         | `gcs-presigned`  | GCP-hosted applications   |
| Azure Blob   | `azure`       | `azure-presigned`| Azure-hosted applications |

---
## Security Features
File uploads are one of the most exploited attack vectors in web applications. Express Storage protects you by default.
### Path Traversal Prevention

Attackers try filenames like `../../../etc/passwd` to escape your upload directory. We block this:

```typescript
// These malicious filenames are automatically rejected
"../secret.txt"; // Blocked: path traversal
"..\\config.json"; // Blocked: Windows path traversal
"file\0.txt"; // Blocked: null byte injection
```
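For illustration, a simplified version of this kind of check could look like the following. This is our sketch of the idea, not the package's actual internal code:

```typescript
// Illustrative filename check; express-storage applies equivalent rules internally.
function isSafeFileName(name: string): boolean {
  if (name.length === 0) return false;
  if (name.includes("\0")) return false; // null byte injection
  if (name.includes("/") || name.includes("\\")) return false; // path separators
  if (name.includes("..")) return false; // traversal sequences
  return true;
}
```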
### Filename Sanitization

User-provided filenames can't be trusted. We transform them into safe, unique identifiers:
```
User uploads: "My Photo (1).jpg"
Stored as: "1706123456789_a1b2c3d4e5_my_photo_1_.jpg"
```

The format `{timestamp}_{random}_{sanitized_name}` prevents collisions and removes dangerous characters.
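As a rough sketch of how such a scheme can be implemented (illustrative only; the helper name and details are ours, not the library's internals):

```typescript
import { randomBytes } from "crypto";

// Illustrative take on the {timestamp}_{random}_{sanitized_name} format
function makeSafeFileName(original: string): string {
  const dot = original.lastIndexOf(".");
  const base = dot > 0 ? original.slice(0, dot) : original;
  const ext = dot > 0 ? original.slice(dot).toLowerCase() : "";
  // Lowercase the base name and collapse anything unsafe into underscores
  const sanitized = base.toLowerCase().replace(/[^a-z0-9]+/g, "_");
  const random = randomBytes(5).toString("hex"); // 10 hex characters
  return `${Date.now()}_${random}_${sanitized}${ext}`;
}
```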
### File Validation

Validate before processing. Reject before storing.
```typescript
await storage.uploadFile(file, {
  maxSize: 5 * 1024 * 1024, // 5MB limit
  allowedMimeTypes: ["image/jpeg", "image/png"],
  allowedExtensions: [".jpg", ".png"],
});
```

### Provider-Level Enforcement
For S3 and GCS, file constraints are enforced at the URL level — clients physically cannot upload the wrong file type or size. For Azure (which doesn't support URL-level constraints), we validate after upload and automatically delete invalid files.
---
## Presigned URLs: Client-Side Uploads
Large files shouldn't flow through your server. Presigned URLs let clients upload directly to cloud storage.
### How It Works

```
1. Client → Your Server: "I want to upload photo.jpg (2MB, image/jpeg)"
2. Your Server → Client: "Here's a presigned URL, valid for 10 minutes"
3. Client → Cloud Storage: Uploads directly (your server never touches the bytes)
4. Client → Your Server: "Upload complete, please verify"
5. Your Server: Confirms file exists, returns permanent URL
```

### Server Implementation

```typescript
// Step 1: Generate upload URL
app.post("/upload/init", async (req, res) => {
  const { fileName, contentType, fileSize } = req.body;

  const result = await storage.generateUploadUrl(
    fileName,
    contentType,
    fileSize,
    "user-uploads", // Optional folder
  );

  res.json({
    uploadUrl: result.uploadUrl,
    reference: result.reference, // Save this for later
  });
});

// Step 2: Confirm upload
app.post("/upload/confirm", async (req, res) => {
  const { reference, expectedContentType, expectedFileSize } = req.body;

  const result = await storage.validateAndConfirmUpload(reference, {
    expectedContentType,
    expectedFileSize,
  });

  if (result.success) {
    res.json({ viewUrl: result.viewUrl });
  } else {
    res.status(400).json({ error: result.error });
  }
});
```

### Enforcement by Provider
| Provider | Content-Type Enforced | File Size Enforced | Post-Upload Validation |
| -------- | --------------------- | ------------------ | ---------------------- |
| S3       | At URL level          | At URL level       | Optional               |
| GCS      | At URL level          | At URL level       | Optional               |
| Azure    | Not enforced          | Not enforced       | Required               |

For Azure, always call `validateAndConfirmUpload()` with expected values. Invalid files are automatically deleted.

---
## Large File Uploads
For files larger than 100MB, we recommend using presigned URLs instead of direct server uploads. Here's why:
### The Memory Problem
When you upload through your server, the entire file must be buffered in memory (or stored temporarily on disk). For a 500MB video file, that's 500MB of RAM per concurrent upload. With presigned URLs, the file goes directly to cloud storage — your server only handles small JSON requests.
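If you do accept direct uploads, it's also worth capping sizes at the multer layer so oversized requests are rejected before a full buffer is allocated. A minimal configuration sketch using standard multer options:

```typescript
import multer from "multer";

// Reject files over 100MB before they are fully buffered in memory
const upload = multer({
  storage: multer.memoryStorage(),
  limits: { fileSize: 100 * 1024 * 1024 },
});
```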
### Automatic Streaming
For files that must go through your server, Express Storage automatically uses streaming uploads for files larger than 100MB:
- S3: Uses multipart upload with 10MB chunks
- GCS: Uses resumable uploads with streaming
- Azure: Uses block upload with streaming
This happens transparently — you don't need to change your code.
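To make the chunking concrete, here's a sketch of how a 10MB part size splits a file into byte ranges (the helper is illustrative, not the library's internal code):

```typescript
const PART_SIZE = 10 * 1024 * 1024; // 10MB, matching the S3 chunk size above

// Compute [start, end) byte ranges for each part of a file
function partRanges(fileSize: number, partSize: number = PART_SIZE): Array<[number, number]> {
  const ranges: Array<[number, number]> = [];
  for (let start = 0; start < fileSize; start += partSize) {
    ranges.push([start, Math.min(start + partSize, fileSize)]);
  }
  return ranges;
}

// A 25MB file becomes three parts: 10MB + 10MB + 5MB
```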
### Example: Large File Upload
```typescript
// Frontend: Request presigned URL
const { uploadUrl, reference } = await fetch("/api/upload/init", {
  method: "POST",
  body: JSON.stringify({
    fileName: "large-video.mp4",
    contentType: "video/mp4",
    fileSize: 524288000, // 500MB
  }),
}).then((r) => r.json());

// Frontend: Upload directly to cloud (bypasses your server!)
await fetch(uploadUrl, {
  method: "PUT",
  body: file,
  headers: { "Content-Type": "video/mp4" },
});

// Frontend: Confirm upload
await fetch("/api/upload/confirm", {
  method: "POST",
  body: JSON.stringify({ reference }),
});
```

### Recommended Size Limits
| Scenario | Recommended Limit | Reason |
| ------------------------------ | ----------------- | ------------------------------ |
| Direct upload (memory storage) | < 100MB | Node.js memory constraints |
| Direct upload (disk storage) | < 500MB | Temp file management |
| Presigned URL upload | 5GB+ | Limited only by cloud provider |
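The table above can be encoded as a simple helper if you want to pick a strategy programmatically (the function and its names are ours, not part of the package):

```typescript
type UploadStrategy = "direct-memory" | "direct-disk" | "presigned";

const MB = 1024 * 1024;

// Thresholds follow the recommended limits in the table above
function recommendStrategy(fileSizeBytes: number): UploadStrategy {
  if (fileSizeBytes < 100 * MB) return "direct-memory";
  if (fileSizeBytes < 500 * MB) return "direct-disk";
  return "presigned";
}
```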
---
## API Reference
### StorageManager
The main class you'll interact with.
```typescript
import { StorageManager } from "express-storage";

// Use environment variables
const storage = new StorageManager();

// Or configure programmatically
const storage = new StorageManager({
  driver: "s3",
  credentials: {
    bucketName: "my-bucket",
    awsRegion: "us-east-1",
    maxFileSize: 50 * 1024 * 1024, // 50MB
  },
  logger: console, // Optional: enable debug logging
});
```

### Upload Methods
```typescript
// Single file
const result = await storage.uploadFile(file, validation?, options?);

// Multiple files (processed in parallel with concurrency limits)
const results = await storage.uploadFiles(files, validation?, options?);

// Generic upload (auto-detects single vs multiple)
const result = await storage.upload(input, validation?, options?);
```

### Presigned URL Methods
```typescript
// Generate upload URL with constraints
const result = await storage.generateUploadUrl(fileName, contentType?, fileSize?, folder?);

// Generate view URL for existing file
const result = await storage.generateViewUrl(reference);

// Validate upload (required for Azure, recommended for all)
const result = await storage.validateAndConfirmUpload(reference, options?);

// Batch operations
const results = await storage.generateUploadUrls(files, folder?);
const results = await storage.generateViewUrls(references);
```

### File Management
```typescript
// Delete single file
const success = await storage.deleteFile(reference);

// Delete multiple files
const results = await storage.deleteFiles(references);

// List files with pagination
const result = await storage.listFiles(prefix?, maxResults?, continuationToken?);
```

### UploadOptions
```typescript
interface UploadOptions {
  contentType?: string; // Override detected type
  metadata?: Record<string, string>; // Custom metadata
  cacheControl?: string; // e.g., 'max-age=31536000'
  contentDisposition?: string; // e.g., 'attachment; filename="doc.pdf"'
}

// Example: Upload with caching headers
await storage.uploadFile(file, undefined, {
  cacheControl: "public, max-age=31536000",
  metadata: { uploadedBy: "user-123" },
});
```

### FileValidationOptions
```typescript
interface FileValidationOptions {
  maxSize?: number; // Maximum file size in bytes
  allowedMimeTypes?: string[]; // e.g., ['image/jpeg', 'image/png']
  allowedExtensions?: string[]; // e.g., ['.jpg', '.png']
}
```

---
## Environment Variables

### Core Settings
| Variable               | Description                         | Default                  |
| ---------------------- | ----------------------------------- | ------------------------ |
| `FILE_DRIVER`          | Storage driver to use               | `local`                  |
| `BUCKET_NAME`          | Cloud storage bucket/container name | —                        |
| `BUCKET_PATH`          | Default folder path within bucket   | `""` (root)              |
| `LOCAL_PATH`           | Directory for local storage         | `public/express-storage` |
| `PRESIGNED_URL_EXPIRY` | URL validity in seconds             | `600` (10 min)           |
| `MAX_FILE_SIZE`        | Maximum upload size in bytes        | `5368709120` (5GB)       |

### AWS S3
| Variable         | Description                                     |
| ---------------- | ----------------------------------------------- |
| `AWS_REGION`     | AWS region (e.g., `us-east-1`)                  |
| `AWS_ACCESS_KEY` | Access key ID (optional if using IAM roles)     |
| `AWS_SECRET_KEY` | Secret access key (optional if using IAM roles) |

### Google Cloud Storage
| Variable          | Description                                      |
| ----------------- | ------------------------------------------------ |
| `GCS_PROJECT_ID`  | Google Cloud project ID                          |
| `GCS_CREDENTIALS` | Path to service account JSON (optional with ADC) |

### Azure Blob Storage
| Variable                  | Description                          |
| ------------------------- | ------------------------------------ |
| `AZURE_CONNECTION_STRING` | Full connection string (recommended) |
| `AZURE_ACCOUNT_NAME`      | Storage account name (alternative)   |
| `AZURE_ACCOUNT_KEY`       | Storage account key (alternative)    |

Note: Azure uses `BUCKET_NAME` for the container name (same as S3/GCS).

---
## Utilities
Express Storage includes battle-tested utilities you can use directly.
### Retry with Exponential Backoff

```typescript
import { withRetry } from "express-storage";

const result = await withRetry(() => storage.uploadFile(file), {
  maxAttempts: 3,
  baseDelay: 1000,
  maxDelay: 10000,
  exponentialBackoff: true,
});
```

### File Type Helpers
```typescript
import {
  isImageFile,
  isDocumentFile,
  getFileExtension,
  formatFileSize,
} from "express-storage";

isImageFile("image/jpeg"); // true
isDocumentFile("application/pdf"); // true
getFileExtension("photo.jpg"); // '.jpg'
formatFileSize(1048576); // '1 MB'
```
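For reference, a minimal equivalent of `formatFileSize` might look like this. This is our sketch for illustration; the library's actual formatting may differ:

```typescript
// Illustrative size formatter: 1048576 -> "1 MB"
function formatSize(bytes: number): string {
  if (bytes <= 0) return "0 B";
  const units = ["B", "KB", "MB", "GB", "TB"];
  const i = Math.min(Math.floor(Math.log2(bytes) / 10), units.length - 1);
  const value = bytes / 2 ** (10 * i);
  return `${Number(value.toFixed(2))} ${units[i]}`;
}
```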
### Custom Logger

```typescript
import { StorageManager, Logger } from "express-storage";

const logger: Logger = {
  debug: (msg, ...args) => console.debug(`[Storage] ${msg}`, ...args),
  info: (msg, ...args) => console.info(`[Storage] ${msg}`, ...args),
  warn: (msg, ...args) => console.warn(`[Storage] ${msg}`, ...args),
  error: (msg, ...args) => console.error(`[Storage] ${msg}`, ...args),
};

const storage = new StorageManager({ driver: "s3", logger });
```

---
## Real-World Examples

### Avatar Upload
```typescript
app.post("/users/:id/avatar", upload.single("avatar"), async (req, res) => {
  const result = await storage.uploadFile(
    req.file,
    {
      maxSize: 2 * 1024 * 1024, // 2MB
      allowedMimeTypes: ["image/jpeg", "image/png", "image/webp"],
    },
    {
      cacheControl: "public, max-age=86400",
      metadata: { userId: req.params.id },
    },
  );

  if (result.success) {
    await db.users.update(req.params.id, { avatarUrl: result.fileUrl });
    res.json({ avatarUrl: result.fileUrl });
  } else {
    res.status(400).json({ error: result.error });
  }
});
```

### Document Upload with Presigned URLs
```typescript
// Frontend requests upload URL
app.post("/documents/request-upload", async (req, res) => {
  const { fileName, fileSize } = req.body;

  const result = await storage.generateUploadUrl(
    fileName,
    "application/pdf",
    fileSize,
    `documents/${req.user.id}`,
  );

  // Store pending upload in database
  await db.documents.create({
    reference: result.reference,
    userId: req.user.id,
    status: "pending",
  });

  res.json({
    uploadUrl: result.uploadUrl,
    reference: result.reference,
  });
});

// Frontend confirms upload complete
app.post("/documents/confirm-upload", async (req, res) => {
  const { reference } = req.body;

  const result = await storage.validateAndConfirmUpload(reference, {
    expectedContentType: "application/pdf",
  });

  if (result.success) {
    await db.documents.update(
      { reference },
      {
        status: "uploaded",
        size: result.actualFileSize,
      },
    );
    res.json({ success: true, viewUrl: result.viewUrl });
  } else {
    await db.documents.delete({ reference });
    res.status(400).json({ error: result.error });
  }
});
```

### Gallery Upload (Multiple Files)
```typescript
app.post("/gallery/upload", upload.array("photos", 20), async (req, res) => {
  const files = req.files as Express.Multer.File[];

  const results = await storage.uploadFiles(files, {
    maxSize: 10 * 1024 * 1024,
    allowedMimeTypes: ["image/jpeg", "image/png"],
  });

  const successful = results.filter((r) => r.success);
  const failed = results.filter((r) => !r.success);

  res.json({
    uploaded: successful.length,
    failed: failed.length,
    files: successful.map((r) => ({
      fileName: r.fileName,
      url: r.fileUrl,
    })),
    errors: failed.map((r) => r.error),
  });
});
```

---
## Migrating Between Providers
Moving from local development to cloud production? Or switching cloud providers? Here's how.
### From Local to S3

```env
# Before (development)
FILE_DRIVER=local
LOCAL_PATH=uploads

# After (production)
FILE_DRIVER=s3
BUCKET_NAME=my-app-uploads
AWS_REGION=us-east-1
```

Your code stays exactly the same. Files uploaded before migration remain in their original location — you'll need to migrate existing files separately if needed.
### From S3 to Azure

```env
# Before
FILE_DRIVER=s3
BUCKET_NAME=my-bucket
AWS_REGION=us-east-1

# After
FILE_DRIVER=azure
BUCKET_NAME=my-container
AZURE_CONNECTION_STRING=DefaultEndpointsProtocol=https;AccountName=...
```

Important: If using presigned URLs, remember that Azure requires post-upload validation. Add `validateAndConfirmUpload()` calls to your confirmation endpoints.

---
## TypeScript Support
Express Storage is written in TypeScript and exports all types:
```typescript
import {
  StorageManager,
  StorageDriver,
  FileUploadResult,
  PresignedUrlResult,
  FileValidationOptions,
  UploadOptions,
  Logger,
} from "express-storage";

// Full autocomplete and type checking
const result: FileUploadResult = await storage.uploadFile(file);

if (result.success) {
  console.log(result.fileName); // TypeScript knows this exists
  console.log(result.fileUrl); // TypeScript knows this exists
}
```

---
## Contributing
Contributions are welcome! Please read our contributing guidelines before submitting a pull request.
```bash
# Clone the repository
git clone https://github.com/th3hero/express-storage.git

# Install dependencies
npm install

# Run in development mode
npm run dev

# Build for production
npm run build

# Run linting
npm run lint
```

---
## License

MIT License — use it however you want.
---
- Issues: GitHub Issues
- Author: Alok Kumar (@th3hero)
---
Made for developers who are tired of writing upload code from scratch.