AWS S3 implementation of the Virtual File System

AWS S3 implementation of the Firesystem Virtual File System. Store and manage files in Amazon S3 buckets with a familiar file system API. Built on top of @firesystem/core's BaseFileSystem, providing full compatibility with the Firesystem ecosystem, including reactive events and multi-project workspaces.
- 🌐 Full S3 Integration - Seamless read/write operations with S3 buckets
- 🔄 Dual Mode Operation
  - Strict Mode: Full filesystem compatibility with directory markers
  - Lenient Mode: Works with existing S3 buckets without modifications
- 📁 Virtual Directories - Full directory support using S3 prefixes
- 🏷️ Rich Metadata - Store custom metadata with S3 object tags
- 🔍 Prefix Isolation - Scope operations to specific bucket prefixes
- 📡 Reactive Events - Real-time notifications for all operations
- 🔐 Full TypeScript - Complete type safety and IntelliSense
- 🚀 Production Ready - Battle-tested with comprehensive test coverage
- 🏗️ BaseFileSystem - Extends core BaseFileSystem for consistency
- 🔌 Workspace Compatible - First-class support for @workspace-fs/core
- ⚡ Event System - Full reactive event support via TypedEventEmitter
```bash
npm install @firesystem/s3
# or
yarn add @firesystem/s3
# or
pnpm add @firesystem/s3
```
```typescript
import { S3FileSystem } from "@firesystem/s3";

// Create filesystem instance
const fs = new S3FileSystem({
  bucket: "my-bucket",
  region: "us-east-1",
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  },
});

// Initialize (required for strict mode)
await fs.initialize();

// Use like any filesystem
await fs.writeFile("/hello.txt", "Hello, S3!");
await fs.mkdir("/documents");
await fs.writeFile("/documents/report.pdf", binaryData);
```
```typescript
import { WorkspaceFileSystem } from "@workspace-fs/core";
import { s3Provider } from "@firesystem/s3/provider";

// Register S3 provider
const workspace = new WorkspaceFileSystem();
workspace.registerProvider(s3Provider);

// Load S3 project
const project = await workspace.loadProject({
  id: "cloud-storage",
  name: "Cloud Storage",
  source: {
    type: "s3",
    config: {
      bucket: "my-bucket",
      region: "us-east-1",
    },
  },
});

// Use through project
await project.fs.writeFile("/data.json", { value: 42 });
```

Files written with the quick-start `fs` instance can be read back directly:

```typescript
const file = await fs.readFile("/hello.txt");
console.log(file.content); // "Hello, S3!"

const files = await fs.readDir("/documents");
console.log(files); // [{ name: "report.pdf", ... }]
```
```typescript
const fs = new S3FileSystem({
  bucket: "my-bucket", // Required: S3 bucket name
  region: "us-east-1", // Required: AWS region
  credentials: {
    // Required: AWS credentials
    accessKeyId: "...",
    secretAccessKey: "...",
  },
  prefix: "/app/data/", // Optional: Scope to bucket prefix
  mode: "strict", // Optional: "strict" or "lenient"
});
```
Use lenient mode to work seamlessly with existing S3 buckets:
```typescript
const fs = new S3FileSystem({
  bucket: "existing-bucket",
  region: "us-west-2",
  mode: "lenient", // No directory markers needed
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  },
});

// Works with existing S3 structure
const files = await fs.readDir("/");
// Returns virtual directories inferred from object keys
```
Isolate your filesystem to a specific bucket prefix:
```typescript
const fs = new S3FileSystem({
  bucket: "shared-bucket",
  region: "eu-west-1",
  prefix: "/tenants/customer-123/",
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  },
});

// All operations are scoped to the prefix
await fs.writeFile("/config.json", { version: "1.0" });
// Actually writes to: s3://shared-bucket/tenants/customer-123/config.json
```
Works with S3-compatible services like MinIO, Wasabi, or DigitalOcean Spaces:
```typescript
const fs = new S3FileSystem({
  bucket: "my-bucket",
  region: "us-east-1",
  credentials: {
    accessKeyId: "minioadmin",
    secretAccessKey: "minioadmin",
  },
  clientOptions: {
    endpoint: "http://localhost:9000",
    forcePathStyle: true, // Required for MinIO
  },
});
```
```typescript
// Write text file
await fs.writeFile("/notes.txt", "My notes");

// Write JSON
await fs.writeFile("/config.json", {
  name: "myapp",
  version: "1.0.0",
});

// Write binary data
const buffer = new ArrayBuffer(1024);
await fs.writeFile("/data.bin", buffer);

// Read file
const file = await fs.readFile("/notes.txt");
console.log(file.content); // "My notes"
console.log(file.size); // 8
console.log(file.created); // Date object

// Delete file
await fs.deleteFile("/notes.txt");

// Check existence
const exists = await fs.exists("/notes.txt"); // false
```
```typescript
// Create directory
await fs.mkdir("/projects");

// Create nested directories
await fs.mkdir("/projects/2024/january", true);

// List directory contents
const entries = await fs.readDir("/projects");
// [
//   { name: "2024", type: "directory", ... }
// ]

// Remove empty directory
await fs.rmdir("/projects/temp");

// Remove directory recursively
await fs.rmdir("/projects/old", true);
```
```typescript
// Copy files
await fs.copy("/template.docx", "/documents/new.docx");

// Move/rename files
await fs.rename("/old-name.txt", "/new-name.txt");

// Move multiple files
await fs.move(["/file1.txt", "/file2.txt"], "/archive/");

// Get file stats
const stats = await fs.stat("/large-file.zip");
console.log(stats.size); // File size in bytes
console.log(stats.modified); // Last modified date

// Search with glob patterns
const jsFiles = await fs.glob("**/*.js");
const testFiles = await fs.glob("**/test-*.js");
const rootFiles = await fs.glob("*"); // Root level only
```
S3FileSystem extends BaseFileSystem and provides a full reactive event system:
```typescript
import { FileSystemEvents } from "@firesystem/core";

// File operation events
fs.events.on(FileSystemEvents.FILE_WRITTEN, ({ path, size }) => {
  console.log(`File ${path} uploaded to S3 (${size} bytes)`);
});

fs.events.on(FileSystemEvents.FILE_READ, ({ path, size }) => {
  console.log(`File ${path} downloaded from S3 (${size} bytes)`);
});

fs.events.on(FileSystemEvents.FILE_DELETED, ({ path }) => {
  console.log(`File ${path} removed from S3`);
});

// Operation tracking
fs.events.on(FileSystemEvents.OPERATION_START, ({ operation, path, id }) => {
  console.log(`Starting ${operation} on ${path}`);
});

fs.events.on(
  FileSystemEvents.OPERATION_END,
  ({ operation, path, duration }) => {
    console.log(`Completed ${operation} on ${path} in ${duration}ms`);
  },
);

fs.events.on(FileSystemEvents.OPERATION_ERROR, ({ operation, path, error }) => {
  console.error(`Operation ${operation} failed on ${path}:`, error);
});

// Initialization events
fs.events.on(FileSystemEvents.INITIALIZED, ({ duration }) => {
  console.log(`S3 filesystem initialized in ${duration}ms`);
});

// Watch for changes (client-side simulation)
const watcher = fs.watch("**/*.json", (event) => {
  console.log(`File ${event.path} was ${event.type}`);
});

// Stop watching
watcher.dispose();
```
```typescript
// Write file with metadata
await fs.writeFile("/document.pdf", pdfBuffer, {
  tags: ["important", "contract"],
  author: "John Doe",
  department: "Legal",
});

// Read file with metadata
const file = await fs.readFile("/document.pdf");
console.log(file.metadata);
// { tags: ["important", "contract"], author: "John Doe", ... }
```
| Feature | Strict Mode | Lenient Mode |
| ------------------------- | ---------------------- | ---------------- |
| Directory markers | Creates .../ objects | Virtual only |
| Parent directory check | Required | Not enforced |
| Existing S3 compatibility | Requires markers | Works with any |
| Performance | More S3 requests | Fewer requests |
| Best for | New applications | Existing buckets |
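The practical difference is easiest to see in the object keys each mode produces. Here is a sketch, assuming the marker convention described in this README (the exact marker naming is internal to the library):

```typescript
// Same calls in both modes:
await fs.mkdir("/docs");
await fs.writeFile("/docs/a.txt", "hi");

// Strict mode bucket contents:
//   docs/        <- empty marker object created by mkdir
//   docs/a.txt
//
// Lenient mode bucket contents:
//   docs/a.txt   <- "/docs" exists only virtually, inferred from the key prefix
```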
S3FileSystem extends BaseFileSystem from @firesystem/core, inheriting:
- Standard permission checks (canModify, canCreateIn)
- Atomic write simulation via temp files (the pattern is sketched after this list)
- Consistent error handling
- Path normalization utilities
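For reference, here is a minimal sketch of the temp-file pattern using the raw AWS SDK v3. It illustrates the technique, not the package's internal code; `atomicPut` and the temp-key naming are hypothetical, and S3's copy-then-delete is still not truly atomic (see the limitations below):

```typescript
import {
  S3Client,
  PutObjectCommand,
  CopyObjectCommand,
  DeleteObjectCommand,
} from "@aws-sdk/client-s3";

// Write to a temporary key first, then copy to the final key and delete
// the temp object, so readers never observe a partially written file.
async function atomicPut(
  s3: S3Client,
  bucket: string,
  key: string,
  body: string,
) {
  const tempKey = `${key}.tmp-${Date.now()}`; // hypothetical temp-key scheme
  await s3.send(
    new PutObjectCommand({ Bucket: bucket, Key: tempKey, Body: body }),
  );
  await s3.send(
    new CopyObjectCommand({
      Bucket: bucket,
      Key: key,
      CopySource: `${bucket}/${tempKey}`,
    }),
  );
  await s3.send(new DeleteObjectCommand({ Bucket: bucket, Key: tempKey }));
}
```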
- Strict Mode: Creates empty objects with "/" suffix as directory markers
- Lenient Mode: Directories are virtual and inferred from object prefixes
- JSON Objects: Automatically stringified on write and parsed on read (content handling is sketched after this list)
- Binary Content: ArrayBuffer is encoded as base64 for storage
- Text Content: Stored as-is in UTF-8 encoding
- Large Files: Supports up to 5TB with multipart upload (future enhancement)
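A minimal sketch of those content rules as a standalone helper (hypothetical; the package's internal encoder may differ):

```typescript
// Hypothetical helper mirroring the rules above: ArrayBuffers are
// base64-encoded, strings pass through as UTF-8, objects are stringified.
function encodeContent(content: string | object | ArrayBuffer): string {
  if (content instanceof ArrayBuffer) {
    return Buffer.from(content).toString("base64"); // binary -> base64
  }
  if (typeof content === "string") {
    return content; // text stored as-is in UTF-8
  }
  return JSON.stringify(content); // JSON: stringified on write, parsed on read
}
```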
Firesystem metadata is stored as S3 object metadata (illustrated below with the raw AWS SDK):

- `x-amz-meta-type`: "file" or "directory"
- `x-amz-meta-created`: ISO date string
- `x-amz-meta-modified`: ISO date string
- `x-amz-meta-custom`: JSON-stringified custom metadata
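For illustration, here is roughly how that mapping looks with the raw AWS SDK v3 (example values only; S3 exposes the SDK's `Metadata` entries under the `x-amz-meta-` prefix automatically):

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });
await s3.send(
  new PutObjectCommand({
    Bucket: "my-bucket",
    Key: "document.pdf",
    Body: Buffer.from("...file bytes..."),
    // Stored by S3 as x-amz-meta-type, x-amz-meta-created, etc.
    Metadata: {
      type: "file",
      created: new Date().toISOString(),
      modified: new Date().toISOString(),
      custom: JSON.stringify({ author: "John Doe", tags: ["contract"] }),
    },
  }),
);
```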
Full reactive event support via TypedEventEmitter:
- Operation lifecycle events (start, end, error)
- File operation events (read, written, deleted)
- Directory operation events (created, deleted)
- Storage events (cleared, size calculated)
- Initialization events (initializing, initialized)
1. Use prefixes to limit the scope of list operations
2. Enable lenient mode for existing buckets to reduce requests
3. Batch operations when possible to minimize API calls
4. Cache frequently accessed files locally (see the sketch after this list)
5. Use glob patterns carefully - they require listing many objects
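A minimal read-through cache for tip 4, assuming the `fs` instance from the quick start (a hypothetical helper, not part of the package):

```typescript
const cache = new Map<string, unknown>();

// Serve repeat reads from memory to avoid extra S3 GET requests
async function readCached(path: string): Promise<unknown> {
  if (!cache.has(path)) {
    const file = await fs.readFile(path);
    cache.set(path, file.content);
  }
  return cache.get(path);
}

const config = await readCached("/config.json"); // hits S3 once, then memory
```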
The package includes comprehensive test coverage:
- ✅ Core functionality: 100% tested
- ✅ S3-specific features: Fully tested
- ✅ Cross-provider compatibility: 87% of shared tests passing
1. Large Files: Currently loads entire file content into memory
2. List Performance: S3 LIST operations can be slow with many objects
3. Atomic Operations: S3 doesn't support true atomic operations
4. Permissions: S3 permissions are not mapped to file system permissions
5. Watch Events: File watching is client-side only (no server push from S3)
6. Case Sensitivity: S3 keys are case-sensitive, unlike some file systems
S3FileSystem is a first-class citizen in the Firesystem workspace ecosystem. This enables powerful multi-project workflows with S3 storage.
```typescript
import { WorkspaceFileSystem } from "@workspace-fs/core";
import { s3Provider } from "@firesystem/s3/provider";

// Setup workspace
const workspace = new WorkspaceFileSystem();
workspace.registerProvider(s3Provider);
await workspace.initialize();

// Load multiple S3 projects
const production = await workspace.loadProject({
  id: "prod-data",
  name: "Production Data",
  source: {
    type: "s3",
    config: {
      bucket: "prod-bucket",
      region: "us-east-1",
      mode: "lenient", // Works with existing S3 data
    },
  },
});

const backup = await workspace.loadProject({
  id: "backup-data",
  name: "Backup Storage",
  source: {
    type: "s3",
    config: {
      bucket: "backup-bucket",
      region: "us-west-2",
      prefix: "/daily-backups/",
    },
  },
});
```
```typescript
// Copy between S3 buckets
const data = await production.fs.readFile("/current/data.json");
await backup.fs.writeFile(`/backup-${Date.now()}.json`, data.content);

// Sync from production to backup
await workspace.copyFiles(
  "prod-data",
  "/reports/*.pdf",
  "backup-data",
  "/reports/",
);

// Mix S3 with other storage types
const local = await workspace.loadProject({
  id: "local-cache",
  source: { type: "indexeddb", config: { dbName: "cache" } },
});

// Download from S3 to local browser storage
const s3File = await production.fs.readFile("/large-dataset.json");
await local.fs.writeFile("/cached-dataset.json", s3File.content);
```
The S3 provider supports credential resolution from the environment:

```bash
# AWS credentials
export AWS_ACCESS_KEY_ID=your_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_key
export AWS_REGION=us-east-1
```
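With those variables exported, a project can be loaded without inline credentials. This sketch assumes the provider falls back to the environment when `config.credentials` is omitted:

```typescript
const project = await workspace.loadProject({
  id: "env-configured",
  name: "Env Configured",
  source: {
    type: "s3",
    config: {
      bucket: "my-bucket",
      region: process.env.AWS_REGION ?? "us-east-1",
      // credentials resolved from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
    },
  },
});
```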
You can inspect the provider's capabilities at runtime:

```typescript
const provider = workspace.getProvider("s3");
console.log(provider.getCapabilities());
// {
//   readonly: false,
//   caseSensitive: true,
//   atomicRename: false,
//   supportsWatch: false,
//   supportsMetadata: true,
//   supportsGlob: false,
//   maxFileSize: 5497558138880, // 5TB
//   maxPathLength: 1024,
//   description: "AWS S3 cloud storage with eventual consistency..."
// }
```

Contributions are welcome! Please feel free to submit a Pull Request.
MIT © Anderson D. Rosa
- @firesystem/core - Core interfaces
- @firesystem/memory - In-memory implementation
- @firesystem/indexeddb - Browser storage
- @workspace-fs/core - Multi-project support