A Motion Canvas component for handling narration and subtitles with AI provider support
Motion Canvas Narrator seamlessly integrates narration into your Motion Canvas workflow.
Inspired by Motion Canvas' idea of letting your code define your animations, this package allows you to define narrations in code, making it easy to synchronize voiceovers and subtitles with your animations.
You define your narrations and let them guide you through your voice recordings while subtitles are displayed in the editor, or let AI generate the audio for you.
Please note that this package is still in early development, so some bugs and missing features are expected. Contributions and suggestions are highly welcome!
The source code is available here: Example Project
Currently supported providers:

| Provider          | TTS | Voice Cloning | Fine-Grained Timestamps | Remarks                         |
|-------------------|:---:|:-------------:|:-----------------------:|---------------------------------|
| ElevenLabs TTS    | ✅  |      ✅       |                         | Requires account                |
| ElevenLabs Sound  | ✅  |               |                         | Sound effects, requires account |
| Google Vertex AI  | WIP |               |                         | Requires a Google Cloud project |
| Minimax           | ✅  |               |                         | Requires account                |
| File Provider     | ✅  |      N/A      |                         | Load from local files           |
| Mock Provider     | ✅  |      N/A      |                         | For testing/planning (no audio) |
Other potential providers:
- piper1: GPL licensed, can be easily set up on your machine.
- Web Speech API: Built-in browser TTS, no API key required, but does not support exporting audio files.
- Speechify
Features:
- In-Editor Recording: Record your own narrations directly in the Motion Canvas editor.
- Subtitles: Display precise subtitles with your narrations (check out the example project for rudimentary subtitles).
- Caption Export: Export subtitles in various formats (e.g., WebVTT).
- Detailed Timestamping: Timestamps for individual characters and words allow better synchronization and subtitles (example).
Using Motion Canvas Narrator in your Motion Canvas project is straightforward and only requires a few steps to set up.
You can also check out the example project that includes subtitles used for the demo video here: Example Project
```bash
npm install https://github.com/prathje/motion-canvas-narrator.git
```
First, install the cache plugin package:
```bash
npm install motion-canvas-cache
```
Then add the plugin to your vite.config.ts file:
```typescript
import {defineConfig} from 'vite';
import motionCanvas from '@motion-canvas/vite-plugin';
import ffmpeg from '@motion-canvas/ffmpeg';
import { motionCanvasCachePlugin } from 'motion-canvas-cache/vite-plugin';

export default defineConfig({
  plugins: [
    motionCanvas(),
    ffmpeg(), // make sure that you set up ffmpeg to export audio as well
    // Add the cache plugin for server-side audio caching:
    motionCanvasCachePlugin(),
  ],
});
```
#### Using ElevenLabs TTS:
```typescript
import { createElevenLabsNarrator } from 'motion-canvas-narrator';

const narrator = createElevenLabsNarrator({
  modelId: 'eleven_v3',
  voiceId: 'JBFqnCBsd6RMkjVDRZzb',
  apiKey: 'YOUR_ELEVENLABS_API_KEY', // replace with your ElevenLabs API key
});
```
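If you prefer not to hard-code the key, here is a minimal sketch assuming a Vite environment variable (the name `VITE_ELEVENLABS_API_KEY` is an assumption, defined e.g. in an `.env.local` file; note that `VITE_`-prefixed variables are still exposed to client-side code, so this only keeps the key out of version control):

```typescript
import { createElevenLabsNarrator } from 'motion-canvas-narrator';

// VITE_ELEVENLABS_API_KEY is an assumed variable name, defined e.g. in .env.local
const narrator = createElevenLabsNarrator({
  modelId: 'eleven_v3',
  voiceId: 'JBFqnCBsd6RMkjVDRZzb',
  apiKey: import.meta.env.VITE_ELEVENLABS_API_KEY,
});
```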
#### Using Google Vertex AI (requires @google-cloud/vertexai):
```typescript
import { createVertexAINarrator } from 'motion-canvas-narrator';

const narrator = createVertexAINarrator({
  projectId: 'your-google-cloud-project',
  voiceName: 'Puck', // Options: Kore, Puck, Charon, Aoede
  instruction: 'Speak naturally' // Optional voice instruction
});
```
#### Using Minimax:
```typescript
import { createMinimaxNarrator } from 'motion-canvas-narrator';

const narrator = createMinimaxNarrator({
  apiKey: 'YOUR_MINIMAX_API_KEY', // replace with your Minimax API key
  voiceId: 'your-voice-id'
});
```
#### Using Mock Provider (for testing):
```typescript
import { createMockNarrator } from 'motion-canvas-narrator';

const narrator = createMockNarrator({
  wordsPerMinute: 150 // Optional, defaults to 120
});
```
To narrate, yield the `speak` call inside your scene:

```typescript
yield* narrator.speak("Welcome!");
```
This will generate frames for the duration of the narration.
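Since `speak` yields like any other Motion Canvas task, you can chain several narrations and add pauses with `waitFor`. A minimal sketch (the sentences are placeholders):

```typescript
import { waitFor } from '@motion-canvas/core';

// Narrations run back to back; each speak call holds the scene
// for the duration of its generated audio.
yield* narrator.speak("First, let's look at the setup.");
yield* waitFor(0.5); // short pause between sentences
yield* narrator.speak("Now we can start animating.");
```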
Note that the narration seamlessly integrates with Motion Canvas' animation system, allowing you to synchronize animations with the narration, for example using `all`:

```typescript
yield* all(
  // ... other animations ...
  narrator.speak("Welcome!")
);
```
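As a fuller sketch of how this fits into a complete scene (using the mock provider so no API key is needed; the circle and its tween are purely illustrative, not part of this package):

```typescript
import { makeScene2D, Circle } from '@motion-canvas/2d';
import { all } from '@motion-canvas/core';
import { createMockNarrator } from 'motion-canvas-narrator';

// The mock narrator estimates durations from word count, so the timing
// can be planned before any real TTS audio is generated.
const narrator = createMockNarrator({ wordsPerMinute: 150 });

export default makeScene2D(function* (view) {
  const circle = new Circle({ size: 120, fill: '#68ABDF' });
  view.add(circle);

  // The tween and the narration run in parallel; all() finishes when
  // the longer of the two is done.
  yield* all(
    circle.scale(2, 2),
    narrator.speak('The circle grows while the narration plays.'),
  );
});
```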
You can also customize the narration further by providing additional options such as volume and playback rate:
```typescript
yield* narrator.speak("Hello, world!", { gain: -5.2, playbackRate: 1.2 });
```

If you'd like to contribute to this project, please feel free to open issues or pull requests.