A unified abstraction layer for AWS S3, Google Cloud Storage, and Azure Blob Storage with upload, download, delete, and signed URL support.
## Installation

```bash
npm install unified-cloud-storage
```

## 1. Google Cloud Storage (GCP) Provider

To use Google Cloud Storage with CloudStorageFactory, set the provider to GCP and supply the following configuration.

### Basic Usage

```ts
import { CloudStorageFactory, CloudProvider } from "unified-cloud-storage";
const cloudStorage = CloudStorageFactory.create({
provider: CloudProvider.GCP,
gcp: {
baseBucket: "my-gcs-bucket",
    serviceAccount: '{ /* service account JSON */ }',
},
});
```

### Full Configuration (Environment Variables)

```ts
{
provider: CloudProvider.GCP,
gcp: {
baseBucket: process.env.GCS_BASE_BUCKET!,
serviceAccount: JSON.parse(process.env.GCS_SERVICE_ACCOUNT_KEY!),
httpProxy: process.env.HTTP_PROXY, // Optional
prefix: process.env.GCS_PREFIX ?? undefined, // Optional
cacheProfile: process.env.CACHE_PROFILE as CacheProfile, // Optional
signedUrlTTL: process.env.SIGNED_URL_TTL // Optional
? Number(process.env.SIGNED_URL_TTL)
: undefined,
gcpCdnDomain: process.env.GCP_CDN_DOMAIN, // e.g. cdn.example.com // Optional
}
}
```

### Required Parameters

#### baseBucket
Type: string
Description:
Name of the Google Cloud Storage bucket where files will be uploaded.
#### serviceAccount

Type: object | string
Description:
Full GCP service account JSON with access to the bucket, typically stored as a string in an environment variable.
Important:
- Do NOT split fields
- Do NOT remove any keys
- Store the entire JSON exactly as downloaded
Example (environment variable):

```
GCS_SERVICE_ACCOUNT_KEY='{
"type": "service_account",
"project_id": "my-gcp-project",
"private_key_id": "abc123",
"private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
"client_email": "my-service@my-gcp-project.iam.gserviceaccount.com",
"client_id": "123456789",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/...",
"universe_domain": "googleapis.com"
}'
```

### Optional Parameters

#### httpProxy
Type: string
Description:
HTTP proxy URL, if outbound traffic must go through a proxy.
#### prefix
Type: string
Description:
Optional prefix automatically prepended to all object keys.
Useful for environment-based isolation (prod/, dev/, etc.)
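As a sketch of how this isolation works (the exact key layout is up to the provider; `prod/` is just an illustrative value here):

```ts
import { CloudStorageFactory, CloudProvider } from "unified-cloud-storage";

// Hypothetical illustration: with prefix "prod/", an upload of "report.pdf"
// ends up under a key like "prod/report.pdf".
const cloudStorage = CloudStorageFactory.create({
  provider: CloudProvider.GCP,
  gcp: {
    baseBucket: "my-gcs-bucket",
    serviceAccount: JSON.parse(process.env.GCS_SERVICE_ACCOUNT_KEY!),
    prefix: "prod/", // prepended to every object key
  },
});
```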
#### cacheProfile
Type: CacheProfile (enum from this package)
Description:
Controls Cache-Control headers applied to uploaded objects when a CDN is used.
Available values:
- `CacheProfile.LONG_LIVED_CACHE`
- `CacheProfile.SHORT_LIVED_CACHE`
| Profile | Cache-Control Header | Use case |
| ------------------- | ------------------------------------- | -------------------------------------------- |
| LONG_LIVED_CACHE | public, max-age=31536000, immutable | Static assets, media |
| SHORT_LIVED_CACHE | private, max-age=600 | User-specific or frequently changing content |
#### signedUrlTTL
Type: number (seconds)
Description:
Default expiration time for GCS signed URLs.
Can be overridden per request.
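A minimal sketch of the default plus a per-request override, using the `urlTtlSeconds` upload option documented under `uploadSingleFile()` below (assumes `cloudStorage` was created with `signedUrlTTL: 3600` as shown above):

```ts
import { createReadStream } from "fs";

// Config default: signed URLs expire after 1 hour (signedUrlTTL: 3600).
// This particular upload's signed URL expires in 15 minutes instead.
const result = await cloudStorage.uploadSingleFile(
  createReadStream("./invoice.pdf"),
  "invoice.pdf",
  "application/pdf",
  { urlAccess: "signed", urlTtlSeconds: 900 },
);
```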
### CDN Configuration (Optional)

#### gcpCdnDomain
Type: string
Description:
Custom domain backed by Google Cloud CDN (HTTPS Load Balancer + backend bucket).
Example: `cdn.example.com`

### URL Behavior
| Access Type | Returned URL |
| ----------------------- | ----------------------------------------------- |
| Public + CDN configured | https://cdn.example.com/object |
| Public (no CDN) | https://storage.googleapis.com/bucket/object |
| Signed | GCS signed URL (Cloud CDN caches it if enabled) |
When provided:
- Public URLs are returned using the CDN domain
- Signed URLs are still generated by GCS and cached by Cloud CDN
## 2. Amazon Web Services (AWS) S3 Provider
To use Amazon S3 with CloudStorageFactory, set the provider to AWS and supply the following configuration.
### Basic Usage

```ts
import { CloudStorageFactory, CloudProvider } from "unified-cloud-storage";
const cloudStorage = CloudStorageFactory.create({
provider: CloudProvider.AWS,
aws: {
bucket: "my-s3-bucket",
region: "ap-south-1",
accessKeyId: "AKIA...",
    secretAccessKey: "****",
},
});
```

### Full Configuration (Environment Variables)

The full configuration also accepts the optional CloudFront parameters (`cloudFrontDomain`, `cloudFrontKeyPairId`, `cloudFrontPrivateKey`).

```ts
{
provider: CloudProvider.AWS,
aws: {
bucket: process.env.AWS_S3_BUCKET!,
region: process.env.AWS_REGION!,
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
httpProxy: process.env.HTTP_PROXY, // Optional
prefix: process.env.AWS_S3_PREFIX ?? undefined, // Optional
cacheProfile: process.env.CACHE_PROFILE as CacheProfile, // Optional
signedUrlTTL: process.env.SIGNED_URL_TTL // Optional
? Number(process.env.SIGNED_URL_TTL)
: undefined,
cloudFrontDomain: process.env.CLOUDFRONT_DOMAIN, // Optional
cloudFrontKeyPairId: process.env.CLOUDFRONT_KEY_PAIR_ID, // Optional
cloudFrontPrivateKey: process.env.CLOUDFRONT_PRIVATE_KEY, // Optional
},
}
```

### Required Parameters

#### bucket
Type: string
Description:
Name of the Amazon S3 bucket where files will be uploaded.
#### region
Type: string
Description:
AWS region where the S3 bucket is hosted.
Example: ap-south-1
#### accessKeyId
Type: string
Description:
AWS IAM access key with permission to access the S3 bucket.
#### secretAccessKey
Type: string
Description:
AWS IAM secret key corresponding to the access key.
### Optional Parameters

#### httpProxy
Type: string
Description:
HTTP proxy URL, if outbound traffic must go through a proxy.
#### prefix
Type: string
Description:
Optional prefix automatically prepended to all S3 object keys.
Useful for environment-based isolation (prod/, dev/, etc.)
#### cacheProfile
Type: CacheProfile (enum from this package)
Description:
Controls Cache-Control headers applied to S3 objects when CloudFront is used.
Available values:
- `CacheProfile.LONG_LIVED_CACHE`
- `CacheProfile.SHORT_LIVED_CACHE`
| Profile | Cache-Control Header | Use case |
| ------------------- | ------------------------------------- | -------------------------------------------- |
| LONG_LIVED_CACHE | public, max-age=31536000, immutable | Static assets, media |
| SHORT_LIVED_CACHE | private, max-age=600 | User-specific or frequently changing content |
#### signedUrlTTL
Type: number (seconds)
Description:
Default expiration time for AWS S3 signed URLs.
Can be overridden per request.
### CloudFront Configuration (Optional)

#### cloudFrontDomain
Type: string
Description:
CloudFront distribution domain pointing to the S3 bucket.
Example: `d3abcd1234xyz.cloudfront.net`

To enable CloudFront signed URLs, the following parameters must be provided in addition to `cloudFrontDomain`.

#### cloudFrontKeyPairId
Type: string
Description:
The CloudFront Key Pair ID associated with a trusted key group or the root account.
This key pair tells CloudFront which public key to use when verifying signed URL requests.
Example: `APKAIATXXXXXXXXXXXXXXXX`

#### cloudFrontPrivateKey
Type: string
Description:
The private key corresponding to the CloudFront Key Pair ID.
Used by the application to cryptographically sign CloudFront URLs.
Important:
- Must be the full RSA private key
- Store securely (environment variable or secret manager)
- Never commit to source control
Example (environment variable):
```
CLOUDFRONT_PRIVATE_KEY="-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAr...
...
-----END RSA PRIVATE KEY-----"
```

### URL Behavior
| Access Type | Returned URL |
| --------------------------- | ---------------------------------------------------------------------------------------- |
| Public + CloudFront enabled | https://d3abcd1234xyz.cloudfront.net/object |
| Public (no CloudFront) | https://bucket.s3.region.amazonaws.com/object |
| S3_SIGNED | S3 signed URL (CloudFront caches if configured) |
| CLOUDFRONT_SIGNED | https://d3abcd123.cloudfront.net/path/file.png?Expires=...&Signature=...&Key-Pair-Id=... |
CLOUDFRONT_SIGNED works for private CloudFront distributions with trusted key groups.

## 3. Microsoft Azure Blob Storage Provider

To use Azure Blob Storage with CloudStorageFactory, set the provider to AZURE and supply the following configuration.

### Basic Usage

```ts
import { CloudStorageFactory, CloudProvider } from "unified-cloud-storage";
const cloudStorage = CloudStorageFactory.create({
provider: CloudProvider.AZURE,
azure: {
accountName: "mystorageaccount",
accountKey: "**",
container: "my-container",
},
});
```

### Full Configuration (Environment Variables)

The full configuration also accepts the optional `azureCdnDomain` parameter.

```ts
{
provider: CloudProvider.AZURE,
azure: {
accountName: process.env.AZURE_STORAGE_ACCOUNT!,
accountKey: process.env.AZURE_STORAGE_KEY!,
container: process.env.AZURE_BLOB_CONTAINER!,
httpProxy: process.env.HTTP_PROXY, // Optional
prefix: process.env.AZURE_BLOB_PREFIX ?? undefined, // Optional
cacheProfile: process.env.CACHE_PROFILE as CacheProfile, // Optional
signedUrlTTL: process.env.SIGNED_URL_TTL // Optional
? Number(process.env.SIGNED_URL_TTL)
: undefined,
azureCdnDomain: process.env.AZURE_CDN_DOMAIN, // Optional
},
}
```

### Required Parameters

#### accountName
Type: string
Description:
Name of the Azure Storage Account.
Example: `mystorageaccount`

#### accountKey
Type: string
Description:
Access key for the Azure Storage Account.
Notes:
- Generated in Azure Portal → Storage Account → Access keys
- Either key1 or key2 can be used
#### container
Type: string
Description:
Name of the Blob Storage container where files will be stored.
### Optional Parameters

#### httpProxy
Type: string
Description:
HTTP proxy URL, if outbound traffic must go through a proxy.
#### prefix
Type: string
Description:
Optional prefix automatically prepended to all blob names.
Useful for environment-based isolation (prod/, dev/, etc.)
#### cacheProfile
Type: CacheProfile (enum from this package)
Description:
Controls Cache-Control headers applied to uploaded blobs when Azure CDN is used.
Available values:
- `CacheProfile.LONG_LIVED_CACHE`
- `CacheProfile.SHORT_LIVED_CACHE`
| Profile | Cache-Control Header | Use case |
| ------------------- | ------------------------------------- | -------------------------------------------- |
| LONG_LIVED_CACHE | public, max-age=31536000, immutable | Static assets, media |
| SHORT_LIVED_CACHE | private, max-age=600 | User-specific or frequently changing content |
#### signedUrlTTL
Type: number (seconds)
Description:
Default expiration time for Azure SAS URLs.
Can be overridden per request.
### CDN Configuration (Optional)

#### azureCdnDomain
Type: string
Description:
Custom domain backed by Azure CDN pointing to the blob container.
Example: `cdn.example.com`

### URL Behavior
| AccessType | Returned URL |
| ------------------- | ------------------------------------------------------------------------------ |
| AZURE_BLOB_PUBLIC | https://account.blob.core.windows.net/container/path/file.png |
| AZURE_CDN_PUBLIC | https://cdn.example.com/path/file.png |
| AZURE_BLOB_SIGNED | https://account.blob.core.windows.net/container/path/file.png?sv=...&sig=... |
| AZURE_CDN_SIGNED | https://cdn.example.com/path/file.png?sv=...&sig=... |
NOTE: In Azure, public blob access is controlled at the CONTAINER level, so the container must be configured to allow public blob access. Unlike GCS, where individual files can be made public, Azure requires one of:
- Container PublicAccessType = "blob" (anonymous read for blobs only)
- PublicAccessType = "container" (anonymous read for the container & blobs)

Verify that the container allows public access before requesting public URLs.
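A minimal sketch of enabling this with the Azure SDK directly (this is outside this package and assumes the same environment variables as above):

```ts
import {
  BlobServiceClient,
  StorageSharedKeyCredential,
} from "@azure/storage-blob";

const credential = new StorageSharedKeyCredential(
  process.env.AZURE_STORAGE_ACCOUNT!,
  process.env.AZURE_STORAGE_KEY!,
);
const serviceClient = new BlobServiceClient(
  `https://${process.env.AZURE_STORAGE_ACCOUNT}.blob.core.windows.net`,
  credential,
);
const containerClient = serviceClient.getContainerClient(
  process.env.AZURE_BLOB_CONTAINER!,
);

// "blob" = anonymous read access to blobs only (no container listing)
await containerClient.setAccessPolicy("blob");
```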
## Performing Storage Operations (Upload, Download, and Delete)

### 1. Single File Upload (Streaming to Cloud Storage)
This module provides stream-based single file upload to cloud storage (AWS S3 / GCS / Azure Blob).
Files are streamed directly, with no server-side storage, so large files work too.

Provider: `uploadSingleFile()`
- Uploads one file at a time
- Streams directly to cloud storage
- Supports public and private files
- Returns either a public URL or a signed URL
- Handles CDN URLs (CloudFront / Cloud CDN / Azure CDN) if configured
- TTL (expiry) can be set for signed URLs
How it works:
- Receives the file as a stream
- Streams the file to cloud storage
- Sets visibility (public/private)
- Returns a URL you can use to download
Provider API
Input Parameters
- stream — File stream from the request
- filename — Original file name
- mimeType — File content type
- options — Optional:
  - visibility — "public" or "private" (default "private")
  - urlAccess — "public" or "signed" (default "signed")
  - urlTtlSeconds — Signed URL expiry in seconds (default: 3600 seconds = 1 hour)
Returns
```
{
url: string; // Download URL
key: string; // Path in cloud storage
bucket: string; // Bucket / container name
filename: string; // Original filename
mimeType: string; // Content type
isPublic: boolean; // Public or private
  urlType:
    | "GCS_PUBLIC" | "GCS_SIGNED"
    | "CLOUDFRONT_PUBLIC" | "CLOUDFRONT_SIGNED"
    | "CLOUD_CDN_PUBLIC" | "CLOUD_CDN_SIGNED"
    | "S3_PUBLIC" | "S3_SIGNED"
    | "AZURE_CDN_PUBLIC" | "AZURE_CDN_SIGNED"
    | "AZURE_BLOB_PUBLIC" | "AZURE_BLOB_SIGNED";
}
```

Notes:
- filename must be unique (use a UUID or timestamp)
- The provider handles streaming + ACLs + signed URLs automatically
- Cache headers are applied only if a CDN is configured
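A minimal direct-call sketch (in the HTTP handlers below the stream comes from Busboy; here it is a local file, and the UUID-based filename is just one way to keep names unique):

```ts
import { createReadStream } from "fs";
import { randomUUID } from "crypto";

const result = await cloudStorage.uploadSingleFile(
  createReadStream("./photo.jpg"),
  `${randomUUID()}-photo.jpg`, // unique filename, as recommended above
  "image/jpeg",
  { visibility: "private", urlAccess: "signed", urlTtlSeconds: 900 },
);
console.log(result.url, result.urlType); // e.g. a signed URL with urlType "GCS_SIGNED"
```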
### Usage Example

Controller

```ts
@Post("upload-single-file")
upload(@Req() req: Request, @Res() res: Response) {
return this.service.handleUpload(req, res);
}
```

Service

What it does:
- Validates the request content type
- Reads optional query parameters
- Parses multipart data using Busboy
- Streams the file directly to cloud storage
- Applies cache headers (if a CDN is enabled)
- Returns a response exactly once per request

```ts
async handleUpload(req: Request, res: Response) {
// ------------------- VALIDATION -------------------
if (!req.headers["content-type"]?.includes("multipart/form-data")) {
return res
.status(400)
.json({ message: "Content-Type must be multipart/form-data" });
}
let fileReceived = false;
let responded = false;
const fail = (status: number, message: string) => {
if (!responded) {
responded = true;
res.status(status).json({ message });
}
};
// ------------------- PARSE QUERY PARAMS -------------------
const legacyIsPublic = req.query.isPublic === "true";
const options = {
// visibility : public or private (default private)
visibility:
req.query.visibility === "public" || legacyIsPublic
? "public"
: "private",
// URL type: public or signed (default signed)
urlAccess: req.query.urlAccess === "public" ? "public" : "signed",
// TTL in seconds
urlTtlSeconds: req.query.signedUrlTTL
? Number(req.query.signedUrlTTL)
: undefined,
};
// ------------------- BUSBOY SETUP -------------------
const busboy = Busboy({ headers: req.headers });
busboy.on("file", (_field, file, info) => {
fileReceived = true;
if (!info.filename) {
file.resume();
return fail(400, "Filename missing");
}
// ------------------- UPLOAD -------------------
this.cloudStorage
.uploadSingleFile(file, info.filename, info.mimeType, options)
.then((result) => {
if (!responded) {
responded = true;
res.json(result);
}
})
.catch((err) =>
fail(
500,
err instanceof Error
? err.message
: "Upload failed. Please try again",
),
);
    file.on("error", (err) => fail(500, `File stream error: ${err.message}`));
});
// ------------------- ERROR HANDLING -------------------
busboy.on("error", (err) => fail(500, Busboy error: ${err.message}));
busboy.on("finish", () => {
if (!fileReceived) {
fail(400, "No file provided");
}
});
// ------------------- STREAM REQUEST INTO BUSBOY -------------------
req.pipe(busboy);
// ------------------- CLEANUP ON DISCONNECT -------------------
req.on("close", () => {
console.log("Client disconnected.");
});
}
```

How to Test in Postman

Request
Method: POST
URL: http://localhost:3000/upload-single-file?visibility=private&urlAccess=signed&signedUrlTTL=900
(defaults: visibility=private, urlAccess=signed, signedUrlTTL=3600)
Headers:
Content-Type: multipart/form-data

Body (form-data):
- Go to Body → form-data
- Add key file
- Change type Text → File
- Select a file from your computer
| Key | Type | Value |
| ---- | ---- | ------------- |
| file | File | Choose a file |
Click Send
Sample Response:

```
{
"url": "https://signed.cloud.com/uploads/my-file.pdf",
"key": "uploads/my-file.pdf",
"bucket": "my-cloud-bucket",
"filename": "my-file.pdf",
"mimeType": "application/pdf",
"isPublic": false,
"urlType": "GCS_SIGNED"
}
```

### 2. Multiple File Upload (Sequential / Streaming to Cloud Storage)
This module allows uploading multiple files at once to cloud storage (AWS S3 / GCS / Azure Blob).
Files are streamed directly, with no server-side storage, so large files work too.
Uploads are processed with limited concurrency to avoid overloading the server.

Provider: `uploadSingleFile()` (used internally)
- Each file is uploaded individually using uploadSingleFile()
- Supports public and private files
- Returns either a public URL or a signed URL
- Handles CDN URLs (CloudFront / Cloud CDN / Azure CDN) if configured
- TTL (expiry) can be set for signed URLs
How it works:
- Receives files as streams from the request
- Streams each file to cloud storage
- Sets visibility (public/private)
- Returns an array of upload results

### Usage Example

Controller

```ts
@Post("upload-multiple-files")
uploadMultiple(@Req() req: Request, @Res() res: Response) {
return this.service.handleMultipleFilesSequentialUpload(req, res);
}
```

Service

What it does:
- Validates the request content type
- Reads optional query params per file (visibility, URL type, TTL)
- Streams each file to cloud storage with limited concurrency
- Returns an upload result for each file

```ts
async handleMultipleFilesSequentialUpload(req: Request, res: Response) {
if (!req.headers["content-type"]?.includes("multipart/form-data")) {
return res.status(400).json({ message: "multipart/form-data required" });
}
const busboy = Busboy({ headers: req.headers });
// ---------------- CONCURRENCY CONTROL ----------------
const MAX_CONCURRENT_UPLOADS = 2;
let activeUploads = 0;
const waitQueue: (() => void)[] = [];
const acquireSlot = async () => {
if (activeUploads < MAX_CONCURRENT_UPLOADS) {
activeUploads++;
return;
}
    await new Promise<void>((resolve) => waitQueue.push(() => resolve()));
activeUploads++;
};
const releaseSlot = () => {
activeUploads--;
const next = waitQueue.shift();
if (next) next();
};
type FileMeta = {
visibility?: "public" | "private";
urlAccess?: "public" | "signed";
urlTtlSeconds?: number;
};
  const metadata: Record<number, FileMeta> = {};
const pendingStreams = new Map<
number,
{
gate: PassThrough;
filename: string;
mimeType: string;
uploadStarted: boolean;
}
>();
  const uploadPromises: Promise<void>[] = [];
const results: any[] = [];
let responded = false;
const fail = (status: number, message: string) => {
if (!responded) {
responded = true;
res.status(status).json({ message });
}
};
// ---------------- START UPLOAD ----------------
const startUpload = (index: number) => {
const entry = pendingStreams.get(index);
if (!entry || entry.uploadStarted) return;
const options = metadata[index];
if (!options) return;
entry.uploadStarted = true;
const uploadPromise = (async () => {
await acquireSlot();
try {
const result = await this.cloudStorage.uploadSingleFile(
entry.gate, // streamed only after metadata
entry.filename,
entry.mimeType,
options,
);
results.push(result);
} catch (err: any) {
results.push({
filename: entry.filename,
error: err?.message ?? "Upload failed",
});
} finally {
releaseSlot();
}
})();
uploadPromises.push(uploadPromise);
};
// ---------------- FIELD HANDLING ----------------
busboy.on("field", (name, value) => {
const match = name.match(/^files\[(\d+)\]\[(.+)\]$/);
if (!match) return;
const index = Number(match[1]);
const field = match[2];
metadata[index] ??= {};
switch (field) {
case "visibility":
metadata[index].visibility =
value === "public" || value === "true" ? "public" : "private";
break;
case "urlAccess":
metadata[index].urlAccess = value === "public" ? "public" : "signed";
break;
case "urlTtlSeconds":
const ttl = Number(value);
if (!Number.isNaN(ttl) && ttl > 0) {
metadata[index].urlTtlSeconds = ttl;
}
break;
}
// If file already arrived, start upload now
startUpload(index);
});
// ---------------- FILE HANDLING ----------------
busboy.on("file", (name, file, info) => {
const match = name.match(/^files\[(\d+)\]\[file\]$/);
if (!match) {
file.resume();
return;
}
const index = Number(match[1]);
const gate = new PassThrough();
pendingStreams.set(index, {
gate,
filename: info.filename,
mimeType: info.mimeType || "application/octet-stream",
uploadStarted: false,
});
// Connect file → gate immediately
// Upload starts only when gate is consumed
file.pipe(gate);
// If metadata already arrived, start upload
startUpload(index);
});
// ---------------- FINISH ----------------
busboy.on("finish", async () => {
try {
await Promise.all(uploadPromises);
if (!responded) {
responded = true;
res.json({
total: results.length,
filesUploaded: results,
});
}
} catch (err: any) {
fail(500, err.message);
}
});
busboy.on("error", (err) => fail(500, Busboy error: ${err.message}));
req.pipe(busboy);
}
```

How to Test in Postman

Request
Method: POST
URL: http://localhost:3000/upload-multiple-files
Headers:
Content-Type: multipart/form-data

Important:
In this API, a file's upload does not start until its metadata is available.
What happens internally:
- When a file arrives before its metadata, the server pauses the file stream
- The file waits in memory until its metadata is received
- The upload starts only after the metadata is known

To avoid pausing file streams and make uploads faster, send the metadata first.
When metadata is sent first:
- File streams start uploading immediately
- No pause / resume
- Better performance, especially for large files
Recommended Postman Usage (For Best Performance)
Send metadata first, then files.
Steps:
- Go to Body → form-data
- Add multiple keys for each file

Metadata fields (send these first):

| Key | Type | Value | Notes |
| ----------------------- | ---- | ---------------- | --------------------------------------- |
| files[0][visibility] | Text | public / private | Optional (default private) |
| files[0][urlAccess] | Text | public / signed | Optional (default signed) |
| files[0][urlTtlSeconds] | Text | 900 | Optional signed URL TTL (default 3600) |
| files[1][visibility] | Text | public / private | Optional |
| files[1][urlAccess] | Text | public / signed | Optional |
| files[1][urlTtlSeconds] | Text | 600 | Optional |
File fields (send these after the metadata):
| Key | Type | Value | Notes |
| -------------- | ---- | ------------- | -------------- |
| files[0][file] | File | Select file 1 | File to upload |
| files[1][file] | File | Select file 2 | File to upload |
Click Send
Important: Each file must have a key like files[index][file]. Optional metadata can be added per file.
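A hypothetical client-side sketch of the same request (Node 18+, using the global fetch/FormData; note this test client buffers files in memory, unlike the streaming server):

```ts
import { readFile } from "fs/promises";

const form = new FormData();

// Metadata first, so each upload can start as soon as its file arrives
form.append("files[0][visibility]", "public");
form.append("files[0][urlAccess]", "public");
form.append("files[1][urlTtlSeconds]", "600");

// Files last, keyed as files[index][file]
form.append("files[0][file]", new Blob([await readFile("./a.pdf")]), "a.pdf");
form.append("files[1][file]", new Blob([await readFile("./b.jpg")]), "b.jpg");

const res = await fetch("http://localhost:3000/upload-multiple-files", {
  method: "POST",
  body: form, // fetch sets the multipart boundary automatically
});
console.log(await res.json());
```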
Sample Response:

```
{
"total": 2,
"filesUploaded": [
{
"url": "https://signed.example.com/uploads/uuid-a.pdf",
"key": "uploads/uuid-a.pdf",
"bucket": "my-bucket",
"filename": "a.pdf",
"mimeType": "application/pdf",
"isPublic": false,
"urlType": "S3_SIGNED"
},
{
"url": "https://cdn.example.com/uploads/uuid-b.jpg",
"key": "uploads/uuid-b.jpg",
"bucket": "my-bucket",
"filename": "b.jpg",
"mimeType": "image/jpeg",
"isPublic": true,
"urlType": "AKAMAI_PUBLIC"
}
]
}
```

### 3. Delete Single File from Cloud Storage
This module deletes one file from cloud storage
(AWS S3 / Google Cloud Storage / Azure Blob Storage).
Deletion is done using the storage key returned during upload.
What it does:
- Deletes one file using its storage key
- Works for AWS, GCS, and Azure
- Safe to call even if the file does not exist
- Returns the deletion status

How it works:
- Receives the file key
- Deletes the file from cloud storage
- Returns the delete status
Provider API

Input Parameters
- key — File path / object key in cloud storage

Returns

```
{
key: string; // File key
bucket: string; // Bucket / container name
deleted: boolean; // true if deleted successfully
error?: string; // Error message (if any)
}
```
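A minimal direct-call sketch, deleting by the key returned from a previous upload:

```ts
const result = await cloudStorage.deleteSingleFile("uploads/uuid-file.pdf");
if (!result.deleted) {
  console.error(`Could not delete ${result.key}: ${result.error}`);
}
```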
### Usage Example

Controller

```ts
@Delete("delete-single-file")
async deleteFile(@Body("key") key: string) {
return await this.service.deleteSingleFile(key);
}
```

Service
- Calls the provider function
- Handles any errors thrown by the provider

```ts
async deleteSingleFile(key: string) {
return await this.cloudStorage.deleteSingleFile(key);
}
```

How to Test in Postman

Request
Method: DELETE
URL: http://localhost:3000/delete-single-file
Headers: Content-Type: application/json
Body (raw JSON):

```
{
"key": "uploads/uuid-file.pdf"
}
```

Click Send

Sample Response:

```
{
"key": "uploads/my-file.pdf",
"bucket": "my-cloud-bucket",
"deleted": true
}
```

### 4. Delete Multiple Files from Cloud Storage
This module deletes multiple files in one request from cloud storage
(AWS S3 / Google Cloud Storage / Azure Blob Storage).
Each file is deleted using its storage key.
What it does:
- Deletes multiple files in a single call
- Works for AWS, GCS, and Azure
- Each file is handled independently
- Returns a delete status for each file

How it works:
- Receives a list of file keys
- Deletes each file from cloud storage
- Returns a result for every file

Provider API

Input Parameters

```
{
key: string; // File key in cloud storage
}[]
```

Returns

```
{
key: string; // File key
bucket: string; // Bucket / container name
deleted: boolean; // true if deleted successfully
error?: string; // Error message (if any)
}[]
```

### Usage Example

Controller

```ts
@Delete("delete-multiple-files")
async deleteMultiple(@Body() files: { key: string }[]) {
return await this.service.deleteMultipleFiles(files);
}
```

Service
- Calls the provider function
- Returns an array of deletion results for all requested files

```ts
async deleteMultipleFiles(files: DeleteFileRequest[]) {
  return this.cloudStorage.deleteMultipleFiles(files);
}
```
How to Test in Postman

Request
Method: DELETE
URL: http://localhost:3000/delete-multiple-files
Headers: Content-Type: application/json
Body (raw JSON):

```
[
{ "key": "uploads/file1.pdf" },
{ "key": "uploads/file2.jpg" }
]
```

Click Send

Sample Response:

```
[
{
"key": "uploads/file1.pdf",
"bucket": "my-cloud-bucket",
"deleted": true
},
{
"key": "uploads/file2.jpg",
"bucket": "my-cloud-bucket",
"deleted": false,
"error": "File not found"
}
]
```

### 5. Download Multiple Files as ZIP (Streaming from Cloud Storage)
This API downloads multiple files from cloud storage and returns them as one ZIP file.
Works with AWS S3 / GCS / Azure.
Provider: `createZipStreamForMultipleFileDownloads()`

What it does:
- Takes a list of file keys
- Fetches the files from cloud storage
- Streams them into a ZIP file
- Returns the ZIP as a stream

No files are stored on the server.

Provider API

Input Parameters

```
{
key: string; // File key in cloud storage
saveAs?: string; // Optional filename inside ZIP
}[]
```
Returns
- A Readable stream containing the ZIP archive
- Can be piped directly to HTTP response
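A minimal sketch of consuming the stream outside an HTTP response, e.g. writing the archive to a local file:

```ts
import { createWriteStream } from "fs";
import { pipeline } from "stream/promises";

const zipStream = await cloudStorage.createZipStreamForMultipleFileDownloads([
  { key: "uploads/file1.pdf", saveAs: "file1.pdf" },
  { key: "uploads/file2.jpg", saveAs: "image2.jpg" },
]);
await pipeline(zipStream, createWriteStream("./files.zip"));
```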
Controller
```ts
@Post("download-zip")
async downloadZip(
@Body() files: { key: string; saveAs?: string }[],
@Res() res: Response
) {
const zipStream = await this.service.getZipStream(files);
res.setHeader("Content-Type", "application/zip");
res.setHeader("Content-Disposition", 'attachment; filename="files.zip"');
zipStream.pipe(res);
}
```

Service

```ts
async getZipStream(files: DownloadFileRequest[]) {
return this.cloudStorage.createZipStreamForMultipleFileDownloads(files);
}
```

Important:
- Do NOT use / in saveAs
- ZIP treats / as a folder separator
How to Test in Postman
Request
Method: POST
URL: http://localhost:3000/download-zip
Headers: Content-Type: application/json
Body (raw JSON):
```
[
{ "key": "uploads/file1.pdf", "saveAs": "file1.pdf" },
{ "key": "uploads/file2.jpg", "saveAs": "image2.jpg" }
]
```

Click Send and Download

#### IMPORTANT
- The total ZIP size is validated against MAX_ZIP_SIZE (2 GB)
- The ZIP contains all files at the root level

### 6. Download Single File (Streaming from Cloud Storage)
This API downloads one file from cloud storage and streams it directly to the client.
Works with AWS S3 / GCS / Azure.
Provider: downloadSingleFile()
What it does:
- Reads a file from cloud storage using its key
- Streams it directly to the response
- Optionally renames the downloaded file using saveAs
No file is stored on the server.
Provider API

Input Parameters

```
{
key: string; // Required: file key in cloud storage
saveAs?: string; // Optional: download filename
}
```

Returns
- void — the file is streamed to the response

Controller

```ts
@Post("download-single-file")
async downloadFile(
@Body() file: { key: string; saveAs?: string },
@Res() res: Response,
) {
if (!file.key) {
return res.status(400).json({ message: "key is required" });
}
try {
await this.service.download(file.key, res, file.saveAs);
} catch (err: any) {
if (!res.headersSent) {
res.status(404).json({ message: err.message });
}
}
}
```

Service

```ts
async download(key: string, res: Response, saveAs?: string) {
return this.cloudStorage.downloadSingleFile({ key, saveAs }, res);
}
```

Important:
- Do NOT use / in saveAs
- / is treated as a folder path by browsers and servers

How to Test in Postman
Request
Method: POST
URL: http://localhost:3000/download-single-file
Headers: Content-Type: application/json
Body (raw JSON):
```
{
"key": "uploads/report.pdf",
"saveAs": "my-report.pdf"
}
```