# @marcosremar/cabecao

Modern React 3D avatar component with real-time chat, WebSocket support, and lip-sync capabilities.

## Features
- 🎠 Real-time 3D Avatar: Powered by React Three Fiber and Three.js
- 🎤 Voice Chat: WebSocket-based real-time audio processing
- 👄 Lip-sync: Automatic mouth movement synchronization
- 😊 Facial Expressions: Dynamic facial expressions based on content
- 🎨 Customizable: Easy configuration for gaze direction, animations, and more
- 📦 Streaming: Efficient chunk-based audio streaming
- 🔊 VAD: Voice Activity Detection for hands-free interaction
## Installation

```bash
npm install @marcosremar/cabecao
```
Make sure to install the required peer dependencies:
```bash
npm install @react-three/drei @react-three/fiber @ricky0123/vad-react leva react react-dom socket.io-client three
```
## Quick Start

```jsx
import { Cabecao } from '@marcosremar/cabecao';

function App() {
  return (
    <Cabecao
      apiUrl="http://localhost:4001"
    />
  );
}
```
## Props

| Prop | Type | Default | Description |
|------|------|---------|-------------|
| wsUrl | string | "http://localhost:4002" | WebSocket server URL |
| apiUrl | string | "http://localhost:4001" | REST API server URL |
| r2Url | string | undefined | Cloudflare R2 URL for models |
| modelPath | string | undefined | Custom model path |
| showControls | boolean | false | Show Leva controls |
| autoStartVAD | boolean | false | Auto-start voice detection |
| showStartButton | boolean | true | Show start button |
| vadEnabled | boolean | true | Enable voice activity detection |
| gazeConfig | object | See below | Eye gaze configuration |
| style | object | {} | Custom CSS styles |
| className | string | undefined | CSS class name |
```jsx
const gazeConfig = {
enabled: true,
talking0: {
rightIntensity: 0.15, // 0-1, how much to look right
downIntensity: 0.08 // 0-1, how much to look down
}
};
```
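Intensity values outside the documented 0-1 range can produce odd gaze poses. A small helper (hypothetical, not part of the package API) can clamp user-supplied values before they reach the component:

```javascript
// Hypothetical helper: clamp gaze intensities to the documented 0-1 range.
function normalizeGazeConfig(config) {
  const clamp01 = (v) => Math.min(1, Math.max(0, v));
  return {
    ...config,
    talking0: {
      rightIntensity: clamp01(config.talking0?.rightIntensity ?? 0),
      downIntensity: clamp01(config.talking0?.downIntensity ?? 0),
    },
  };
}

// Out-of-range values are pulled back into 0-1:
const safe = normalizeGazeConfig({
  enabled: true,
  talking0: { rightIntensity: 1.5, downIntensity: -0.2 },
});
// safe.talking0 -> { rightIntensity: 1, downIntensity: 0 }
```

The result can then be passed as `gazeConfig={safe}`.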
Loading models from Cloudflare R2:

```jsx
<Cabecao
  r2Url="https://your-r2-bucket.com"
  wsUrl="ws://your-websocket-server"
/>
```
Custom styling:

```jsx
<Cabecao
  style={{
    width: '100%',
    height: '500px',
    borderRadius: '12px'
  }}
  className="my-avatar"
  showControls={true}
/>
```
## WebSocket Events

The component expects a WebSocket server that handles:

- chat: Audio data with sample rate

```js
{
audio: Float32Array,
sampleRate: 16000
}
```
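Browsers typically capture microphone audio at 44.1 or 48 kHz, so it may need resampling to 16 kHz before being sent. A naive linear-interpolation sketch follows (illustrative only; a production pipeline should apply an anti-aliasing low-pass filter before decimating, and the package's VAD integration may already handle this):

```javascript
// Naive linear-interpolation resampler down to 16 kHz.
// Illustrative only: proper resampling low-pass filters
// the signal before decimation to avoid aliasing.
function resampleTo16k(samples, inputRate) {
  const targetRate = 16000;
  const ratio = inputRate / targetRate;
  const outLength = Math.floor(samples.length / ratio);
  const out = new Float32Array(outLength);
  for (let i = 0; i < outLength; i++) {
    const pos = i * ratio;
    const i0 = Math.floor(pos);
    const i1 = Math.min(i0 + 1, samples.length - 1);
    const frac = pos - i0;
    // Linear interpolation between the two nearest input samples.
    out[i] = samples[i0] * (1 - frac) + samples[i1] * frac;
  }
  return out;
}

// One second of 48 kHz audio becomes one second of 16 kHz audio:
const input = new Float32Array(48000).fill(0.5);
const out = resampleTo16k(input, 48000); // out.length === 16000
```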
- audio-chunk: Audio response with visemes
```js
{
text: "Hello there!",
audio: "data:audio/webm;codecs=opus;base64,UklGRiQF...",
visemes: [
{ v: "X", start: 0, end: 100 },
{ v: "H", start: 100, end: 200 }
],
animation: "Talking_0",
facialExpression: "smile"
}
```
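The visemes array maps time offsets (the payload suggests milliseconds relative to the chunk start, though the unit is an assumption here) to mouth shapes. During playback, a simple lookup like the one below — a sketch, not the package's internal implementation — can select the active viseme for the current audio position:

```javascript
// Sketch: return the viseme active at playback offset t,
// falling back to "X" (silence) outside any interval.
function visemeAt(visemes, t) {
  for (const { v, start, end } of visemes) {
    if (t >= start && t < end) return v;
  }
  return 'X';
}

const visemes = [
  { v: 'X', start: 0, end: 100 },
  { v: 'H', start: 100, end: 200 },
];
// visemeAt(visemes, 150) -> 'H'; visemeAt(visemes, 250) -> 'X'
```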
- chat-error: Error handling

```js
{
error: "Error message"
}
```

## Animations
Supported animations:
- Idle - Default idle animation
- Talking_0 - Primary talking animation
- Talking_1 - Secondary talking animation
- Talking_2 - Tertiary talking animation

## Facial Expressions
Supported expressions:
- default - Neutral expression
- smile - Happy/positive expression
- sad - Sad/negative expression
- surprised - Surprised expression
- angry - Angry expression

## Visemes
The component supports standard visemes A-H and X for lip-sync:
- A - Bilabial sounds (P, B, M)
- B - Velar sounds (K, G)
- C - Vowel I
- D - Vowel A
- E - Vowel O
- F - Vowel U
- G - Fricative sounds (F, V)
- H - Dental sounds (TH, T, D)
- X - Silence

## Example Backend Integration
```js
// Express + Socket.IO server example
const express = require('express');
const { Server } = require('socket.io');
const http = require('http');

const app = express();
const server = http.createServer(app);
const io = new Server(server);
io.on('connection', (socket) => {
socket.on('chat', async (data) => {
const { audio, sampleRate } = data;
// Process audio, generate response
const response = await processAudio(audio, sampleRate);
// Send chunk with visemes
socket.emit('audio-chunk', {
text: response.text,
audio: response.audioBase64,
visemes: response.visemes,
animation: response.animation,
facialExpression: response.facialExpression
});
});
});
server.listen(4002);
```

## Development
```bash
# Clone the repository
git clone https://github.com/marcosremar/cabecao-npm.git
cd cabecao-npm

# Install dependencies
npm install --legacy-peer-deps

# Build the package
npm run build

# Test locally
npm link
```

## Contributing
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)

## License

MIT © Marcos Remar