Node.js SDK for Valence AI Emotion Detection API - Real-time, Async, and Streaming Support
valenceai is a Node.js SDK for interacting with the Valence AI API for emotion analysis. It provides a convenient interface for uploading audio files, streaming real-time audio, and retrieving detected emotional states.
- Discrete audio processing - Real-time analysis for short audio clips
- Asynch audio processing - Multipart parallel upload for long audio files with temporal emotion analysis
- Streaming API - Real-time WebSocket streaming for live audio
- Rate limiting - Monitor API usage and limits
- Environment configuration - Built-in support for .env files
- Enhanced logging - Configurable log levels with timestamps
- TypeScript ready - Full JSDoc documentation for all functions
The emotional classification model used in our APIs is optimized for North American English conversational data. The included model detects four emotions: angry, happy, neutral, and sad. _New models coming soon_.
| API | Best For | Input | Output |
|-----|----------|-------|--------|
| Discrete | Real-time analysis | Short audio (4.5-15s) | Single emotion prediction |
| Asynch | Pre-recorded files | Long audio (up to 1GB) | Timeline with emotion changes |
| Streaming | Live audio streams | Audio chunks via WebSocket | Real-time emotion updates |
The DiscreteAPI is built for real-time analysis of emotions in audio data. Small snippets of audio are sent to the API, which returns real-time feedback on the emotions detected from tone of voice. This API operates on an approximate per-sentence basis, and audio must be cut to the appropriate size.
The AsynchAPI is built for emotion analysis of pre-recorded audio files. Files of any length, up to 1 GB in size, can be sent to the API to receive a timeline of emotions throughout the file.
The StreamingAPI is built for real-time audio analysis via WebSocket connections. The audio stream is analyzed in real-time and emotions are returned in reference to 5-second chunks of streamed audio.
- Format: WAV only
- Recommended sampling rate: 44.1 kHz (44100 Hz)
- Minimum sampling rate: 8 kHz
- Channel: Mono (single channel)
- Discrete API: Minimum 4.5 seconds per file, maximum 15 seconds. 5-10 seconds recommended.
- Asynch API: Minimum 5 seconds, maximum 1 GB
- Streaming API: Real-time audio chunks (Buffer or ArrayBuffer)
For inquiries about custom microphone specifications or stereo/multi-channel support, please contact us.
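The format constraints above can be checked locally before making an API call. The following is a minimal sketch (not part of the SDK) that reads the fmt fields of a canonical 44-byte RIFF/WAVE header to confirm mono audio and the minimum sampling rate:

```javascript
// Minimal WAV header check -- illustration only, not a valenceai SDK export.
// Assumes a canonical 44-byte RIFF header with the fmt chunk at offset 12.
function checkWavHeader(buf) {
  if (buf.toString('ascii', 0, 4) !== 'RIFF' || buf.toString('ascii', 8, 12) !== 'WAVE') {
    throw new Error('Not a WAV file');
  }
  const channels = buf.readUInt16LE(22);   // 1 = mono
  const sampleRate = buf.readUInt32LE(24); // e.g. 44100
  if (channels !== 1) throw new Error('Audio must be mono (single channel)');
  if (sampleRate < 8000) throw new Error('Sampling rate below 8 kHz minimum');
  return { channels, sampleRate };
}
```

For a file on disk, the first 44 bytes are enough: `checkWavHeader(fs.readFileSync('audio.wav').subarray(0, 44))`.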
```bash
npm install valenceai
```
Create a .env file in your project root:
```env
VALENCE_API_KEY=your_api_key # Required
VALENCE_API_BASE_URL=https://api.getvalenceai.com # Optional
VALENCE_WEBSOCKET_URL=wss://api.getvalenceai.com # Optional
VALENCE_LOG_LEVEL=info # Optional: debug, info, warn, error
```
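A small helper can make the fallback behavior of these variables explicit. This sketch is illustrative only (the SDK reads these variables itself when present); the defaults shown are the documented ones:

```javascript
// Resolve client options from an environment map, applying the documented
// defaults. Illustrative helper -- not a valenceai SDK export.
function resolveConfig(env) {
  if (!env.VALENCE_API_KEY) throw new Error('VALENCE_API_KEY is required');
  return {
    apiKey: env.VALENCE_API_KEY,
    baseUrl: env.VALENCE_API_BASE_URL ?? 'https://api.getvalenceai.com',
    websocketUrl: env.VALENCE_WEBSOCKET_URL ?? 'wss://api.getvalenceai.com',
    logLevel: env.VALENCE_LOG_LEVEL ?? 'info',
  };
}
```

With `dotenv` loaded, `resolveConfig(process.env)` yields an options object suitable for the `ValenceClient` constructor shown below.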
```javascript
const client = new ValenceClient({
  apiKey: 'your_api_key',           // API key (required)
  baseUrl: 'https://custom.api',    // Custom API endpoint (optional)
  websocketUrl: 'wss://custom.api', // Custom WebSocket endpoint (optional)
  partSize: 5 * 1024 * 1024,        // Upload chunk size (default: 5MB)
  maxRetries: 3,                    // Max retry attempts (default: 3)
  comprehensiveOutput: false        // When false: asynch API returns timestamp, main_emotion, confidence only.
                                    // When true: also includes all_predictions with all emotion confidences (default: false)
});
```
The Asynch API uses a multi-step process to handle long audio files. Understanding this workflow is crucial for proper implementation:
When you call client.asynch.upload(filePath):
- SDK splits your file into parts (5MB chunks by default)
- Uploads parts in parallel
- Returns a requestId - This is a tracking identifier, not a completion signal.
- At this point: File is uploaded to our server, but _NOT processed yet_.
After upload completes, the server automatically:
- Checks for new uploads
- Downloads audio when a new file is detected
- Splits audio into 5-second segments
- Processes audio file
- Invokes machine learning model for emotion detection
- Stores results in database
- Updates status to completed
Processing Time: Varies based on file length and server load. Typically 1-5 seconds per minute of audio. Upload time varies based on your network speed.
When you call client.asynch.emotions(requestId):
- Polls the status endpoint at regular intervals
- Waits for status progression:
  - initiated → Upload started
  - upload_completed → File uploaded (processing not started)
  - processing → Background processing in progress
  - completed → Results ready
- Returns the emotion timeline when status is completed
| Status | Meaning | What's Happening |
|--------|---------|------------------|
| initiated | Upload started | SDK is uploading file in parts |
| upload_completed | Upload finished | File is waiting for background processor |
| processing | Processing active | Server is analyzing audio |
| completed | Results ready | Emotion timeline is available |
- The requestId is NOT a completion indicator. It's a request tracking ID.
- upload() completing does not mean results are ready. It means the file is uploaded.
- Background processing takes time. Processing time varies based on file length and server load.
- You can check status anytime. The requestId remains valid for retrieving results until databases are cleared (see: DPA for more information on data retention policies).
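The wait step in client.asynch.emotions() follows a standard poll-with-interval pattern. The sketch below mirrors that pattern in isolation; `checkStatus` is a hypothetical stand-in for the SDK's internal status call, not a real export:

```javascript
// Generic poll-until-complete loop, mirroring what client.asynch.emotions()
// does internally. `checkStatus` is a hypothetical stand-in, not an SDK export.
async function pollUntilComplete(checkStatus, maxTries = 20, intervalMs = 5000) {
  for (let attempt = 1; attempt <= maxTries; attempt++) {
    const { status, result } = await checkStatus();
    if (status === 'completed') return result;       // results ready
    if (status === 'failed') throw new Error('Processing failed');
    // still 'initiated', 'upload_completed', or 'processing': wait and retry
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Not completed after ${maxTries} tries`);
}
```

In practice you rarely need this yourself; client.asynch.emotions(requestId, maxTries, intervalMs) does the equivalent for you.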
```javascript
import { ValenceClient } from 'valenceai';
// Initialize client
const client = new ValenceClient({ apiKey: 'your_api_key' });
// Discrete API - Quick emotion detection
const result = await client.discrete.emotions('short_audio.wav');
console.log(`Emotion: ${result.main_emotion}`);
// Asynch API - Long audio with timeline
// Step 1: Upload file (returns tracking ID)
const requestId = await client.asynch.upload('long_audio.wav');
// Step 2: Wait for server processing and get results (polls until complete)
const emotions = await client.asynch.emotions(requestId, 30, 10000);
// Step 3: Access emotion data from results
const emotionList = emotions.emotions; // List of emotion predictions with timestamps
// Get summary statistics
const majority = await client.asynch.majorityEmotion(requestId); // Most frequent emotion
const counts = await client.asynch.emotionCounts(requestId); // { happy: 10, sad: 3, ... }
// Streaming API - Real-time audio
const stream = client.streaming.connect();
stream.on('prediction', (data) => console.log(data.main_emotion));
await stream.connect();
stream.sendAudio(audioBuffer);
stream.disconnect();
// Rate Limit API - Monitor usage
const status = await client.rateLimit.getStatus();
const health = await client.rateLimit.getHealth();
```
For short audio files requiring immediate emotion detection.
```javascript
// Direct file upload
const result = await client.discrete.emotions('audio.wav');

// Upload via in-memory audio array
const arrayResult = await client.discrete.emotions([0.17278, 0.23738, 0.37912, ...]);
```
Response:
```javascript
{
  emotions: {
    happy: 0.78,
    sad: 0.12,
    angry: 0.08,
    neutral: 0.15
  },
  main_emotion: 'happy'
}
```
For long audio files with timeline analysis.
Status Progression: initiated → upload_completed → processing → completed
#### Upload Audio
```javascript
// Upload file (multipart upload, automatically validates file size)
const requestId = await client.asynch.upload('long_audio.wav');
```
Note: The SDK automatically validates file size against your rate limit policy before upload. If the file exceeds the maximum allowed size, a FileSizeLimitExceededError is thrown without attempting the upload. Default maximum is 1GB when no rate limit policy is configured.
#### Get Emotion Results
```javascript
// Poll for results until processing completes
const result = await client.asynch.emotions(
  requestId,
  20,   // maxTries (default: 20, range: 1-100)
  5000  // intervalMs (default: 5000, range: 1000-60000)
);
// This method waits for server processing to complete
// Returns when status is 'completed'
```
Response:
```javascript
{
  emotions: [
    {
      timestamp: 0.5,
      start_time: 0.0,
      end_time: 1.0,
      emotion: 'happy',
      confidence: 0.9,
      all_predictions: { happy: 0.9, sad: 0.1, ... }
    },
    {
      timestamp: 1.5,
      start_time: 1.0,
      end_time: 2.0,
      emotion: 'neutral',
      confidence: 0.85,
      all_predictions: { neutral: 0.85, happy: 0.15, ... }
    }
  ],
  status: 'completed'
}
```
Note: The all_predictions field is only included when comprehensiveOutput: true is set in the client constructor.
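The timeline array above is plain data, so summaries beyond the built-in helpers are easy to compute client-side. As a sketch (operating on a response shaped like the one shown, without extra API calls):

```javascript
// Count emotion occurrences in an Asynch API timeline and pick the most
// frequent one -- similar in spirit to emotionCounts()/majorityEmotion(),
// but computed locally from an already-fetched response.
function summarizeTimeline(result) {
  const counts = {};
  for (const entry of result.emotions) {
    counts[entry.emotion] = (counts[entry.emotion] ?? 0) + 1;
  }
  const majority = Object.entries(counts)
    .sort((a, b) => b[1] - a[1])[0]?.[0] ?? null;
  return { counts, majority };
}
```

This avoids the extra round-trips of majorityEmotion()/emotionCounts() when you already hold the full response.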
#### Helper Methods
```javascript
// Get the most frequently occurring emotion across the entire file
const majority = await client.asynch.majorityEmotion(requestId);
// Returns: "happy"
// Get emotion occurrence counts for the entire file
const counts = await client.asynch.emotionCounts(requestId);
// Returns: { happy: 10, sad: 3, angry: 8, neutral: 9 }
```
For real-time emotion detection on live audio streams.
```javascript
// Create streaming connection
const stream = client.streaming.connect();
// Register event handlers
stream.on('prediction', (data) => {
  console.log(`Emotion: ${data.main_emotion}`);
});
stream.on('error', (error) => {
  console.error(`Error: ${error.message}`);
});
stream.on('connected', (info) => {
  console.log(`Connected: ${info.session_id}`);
});
// Connect to WebSocket
await stream.connect();
// Send audio chunks (Buffer or ArrayBuffer)
stream.sendAudio(audioBuffer);
// Check connection status
if (stream.connected) {
console.log('Streaming active');
}
// Disconnect
stream.disconnect();
```
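Since predictions reference 5-second chunks of streamed audio, it can help to size sendAudio() payloads accordingly when feeding pre-recorded PCM data. Assuming 16-bit mono at the recommended 44.1 kHz (the byte arithmetic below is generic audio math, not an SDK requirement):

```javascript
// Bytes in one N-second chunk of PCM audio.
// Defaults assume the recommended format: 44.1 kHz, 16-bit (2 bytes), mono.
function chunkBytes(seconds, sampleRate = 44100, bytesPerSample = 2, channels = 1) {
  return seconds * sampleRate * bytesPerSample * channels;
}

// 5 seconds at 44.1 kHz mono/16-bit:
const size = chunkBytes(5); // 441000 bytes
```

One way to use this is as the `highWaterMark` for `fs.createReadStream('audio.wav', { highWaterMark: size })`, calling stream.sendAudio(chunk) on each 'data' event.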
Prediction Event:
```javascript
{
  main_emotion: 'happy',
  confidence: 0.87,
  all_predictions: {
    happy: 0.87,
    sad: 0.05,
    angry: 0.03,
    neutral: 0.15
  },
  timestamp: 1706486400000 // Unix timestamp (UTC) in milliseconds
}
```
The timestamp is a Unix timestamp (UTC) in milliseconds representing when the server generated the prediction.
Monitor your API usage and limits.
```javascript
// Get rate limit status
const status = await client.rateLimit.getStatus();
console.log(status);
// {
// policy_name: 'standard_policy',
// limits: {
// requests_per_second: 10,
// requests_per_minute: 100,
// requests_per_hour: 1000,
// requests_per_day: 10000,
// burst_limit: 20,
// max_audio_size_mb: 50, // Maximum file size in MB
// max_audio_duration_seconds: 300, // Maximum audio duration
// max_concurrent_requests: 5
// },
// current_usage: {
// requests_per_second: 2,
// rejected_per_second: 0,
// total_audio_size_bytes_per_second: 1048576,
// requests_per_minute: 15,
// rejected_per_minute: 0,
// total_audio_size_bytes_per_minute: 15728640
// // ... usage for hour and day
// }
// }
// Check API health
const health = await client.rateLimit.getHealth();
console.log(health);
// { status: 'healthy', timestamp: 1738684800 }
```
The reset and timestamp values are Unix timestamps (UTC) in seconds.
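Note the unit difference: rate-limit/health timestamps are Unix seconds, while streaming prediction timestamps are Unix milliseconds. JavaScript's Date constructor expects milliseconds, so only the former needs scaling:

```javascript
// Health/rate-limit timestamps are Unix seconds; streaming prediction
// timestamps are Unix milliseconds. Date expects milliseconds.
const healthTime = new Date(1738684800 * 1000); // value from getHealth()
const predictionTime = new Date(1706486400000); // value from a 'prediction' event

console.log(healthTime.toISOString());
console.log(predictionTime.toISOString());
```

Multiplying a milliseconds value by 1000 (or forgetting to scale a seconds value) silently produces a date tens of thousands of years off or in 1970, so it is worth asserting the unit at the boundary.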
| HTTP Status | Error Code | Description |
|-------------|------------|-------------|
| 400 | AUDIO_TOO_SHORT | Audio duration below minimum (4.5 seconds). Response includes min_duration_seconds and actual_duration_seconds |
| 400 | AUDIO_TOO_LONG | Audio duration above maximum (15 seconds). Response includes max_duration_seconds and actual_duration_seconds |
| 400 | Bad Request | Invalid request format or parameters |
| 401 | Unauthorized | Invalid or missing API key |
| 500 | Server Error | Internal server error |
| HTTP Status | Error Code | Description |
|-------------|------------|-------------|
| 400 | AUDIO_TOO_SHORT | Audio duration below minimum (5 seconds) |
| 400 | FILE_SIZE_LIMIT_EXCEEDED | File size exceeds rate limit policy maximum. Raised before upload attempt |
| 400 | FILE_TOO_LARGE | File exceeds maximum upload size (1 GB). Response includes max_file_size_bytes and actual_file_size_bytes |
| 400 | Bad Request | Invalid request format or parameters |
| 401 | Unauthorized | Invalid or missing API key |
| 404 | Not Found | Request ID not found |
| 500 | Server Error | Internal server error |
Asynch Status Values:
| Status | Meaning |
|--------|---------|
| initiated | Upload in progress |
| upload_completed | Upload finished, awaiting processing |
| processing | Server analyzing audio |
| completed | Results ready |
| failed | Processing failed |
| Event | Description |
|-------|-------------|
| error | Server-side error during streaming |
| warning | Non-fatal warning from server |
| connect_error | WebSocket connection failed |
| disconnect | Connection closed |
| HTTP Status | Description |
|-------------|-------------|
| 401 | Unauthorized - Invalid API key |
| 429 | Too Many Requests - Rate limit exceeded |
| 500 | Server Error |
```javascript
import {
  ValenceClient,
  AudioTooShortError,
  FileSizeLimitExceededError
} from 'valenceai';

try {
  const client = new ValenceClient({ apiKey: 'your_key' });
  const result = await client.discrete.emotions('audio.wav');
} catch (error) {
  if (error instanceof AudioTooShortError) {
    console.error(`Audio too short: ${error.actualDuration}s (min: ${error.minDuration}s)`);
  } else if (error instanceof FileSizeLimitExceededError) {
    console.error(`File too large: ${error.actualSizeMb.toFixed(2)} MB (max: ${error.maxSizeMb} MB)`);
  } else if (error.message.includes('API key')) {
    console.error('Authentication error:', error.message);
  } else if (error.message.includes('File not found')) {
    console.error('File error:', error.message);
  } else if (error.message.includes('API error')) {
    console.error('API error:', error.message);
  } else {
    console.error('Unexpected error:', error.message);
  }
}
```
1. Environment Variable: VALENCE_API_KEY is now the standard (consistent naming across SDKs)
2. Unified Client: Single ValenceClient class with nested APIs
3. Streaming API: New WebSocket-based real-time emotion detection
4. Rate Limiting: New API for monitoring usage
5. Timeline Data: Asynch API now returns detailed timestamp information
6. Helper Methods: Asynch API now includes functions for baseline analysis of emotion timeline
```javascript
// Old (v0.x)
import { predictDiscreteAudioEmotion } from 'valenceai';
const result = await predictDiscreteAudioEmotion('file.wav');
// New (v1.0.0)
import { ValenceClient } from 'valenceai';
const client = new ValenceClient({ apiKey: 'your_key' });
const result = await client.discrete.emotions('file.wav');
// New streaming capability
const stream = client.streaming.connect();
stream.on('prediction', callback);
await stream.connect();
```
- predictDiscreteAudioEmotion() → client.discrete.emotions()
- uploadAsyncAudio() → client.asynch.upload()
- getEmotions() → client.asynch.emotions()
- All methods now require creating a ValenceClient instance first
See CHANGELOG.md for complete migration guide.
The SDK includes comprehensive JSDoc annotations for full TypeScript IntelliSense:
```typescript
import { ValenceClient } from 'valenceai';
const client: ValenceClient = new ValenceClient({ apiKey: 'your_key' });
// Full type inference and autocomplete
const result = await client.discrete.emotions('audio.wav');
// result.main_emotion is typed
```
- Additional Documentation: API Documentation
- Detailed Usage Examples: SDK Examples
- Contact: Valence AI Support
Private License © 2026 Valence Vibrations, Inc., a Delaware public benefit corporation.