A React Native library for extracting audio waveform data from audio and video files. Supports iOS and Android platforms with a functional programming API and streaming processing for large files.

```bash
npm install @lvyanxiang/react-native-audio-waveform
```
- 🎵 Extract waveform data from audio and video files
- 📊 Generate visualization-ready data points
- 🚀 High-performance native implementation
- 📱 iOS and Android support
- ⚡ Functional programming API
- 🎯 TypeScript support
- 🔧 Configurable analysis parameters
- 🌊 NEW: Streaming processing for large files (20MB+)
- 💾 NEW: Memory-efficient chunk processing
- 🔄 NEW: Automatic file size detection and mode switching
```bash
npm install react-native-audio-waveform
# or
yarn add react-native-audio-waveform
```

Run `cd ios && pod install` to install iOS dependencies.

No additional setup required for Android.
```typescript
import { extractWaveform, extractPreview } from 'react-native-audio-waveform';

// Extract full waveform analysis
const analysis = await extractWaveform({
  fileUri: 'file:///path/to/audio.mp3',
  segmentDurationMs: 100, // 100ms segments
});

console.log(analysis.dataPoints); // Array of waveform data points
```
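The returned data points can be consumed directly in application code. Below is a minimal sketch (not part of the library) that scans the documented `amplitude` field to find the loudest segment, using the optional `startTime` field when present and falling back to the requested segment duration otherwise.

```typescript
import { extractWaveform } from 'react-native-audio-waveform';

const SEGMENT_MS = 100;

const analysis = await extractWaveform({
  fileUri: 'file:///path/to/audio.mp3',
  segmentDurationMs: SEGMENT_MS,
});

// Find the loudest segment by peak amplitude.
let loudestIndex = 0;
analysis.dataPoints.forEach((point, index) => {
  if (point.amplitude > analysis.dataPoints[loudestIndex].amplitude) {
    loudestIndex = index;
  }
});

// Prefer the optional startTime field; otherwise estimate from the segment duration.
const loudest = analysis.dataPoints[loudestIndex];
const positionMs = loudest.startTime ?? loudestIndex * SEGMENT_MS;
console.log(`Loudest segment starts around ${positionMs} ms (${loudest.dB.toFixed(1)} dBFS)`);
```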
```typescript
// Generate a quick preview with 100 points
const preview = await extractPreview({
  fileUri: 'file:///path/to/audio.mp3',
  numberOfPoints: 100,
  startTimeMs: 0,
  endTimeMs: 30000, // First 30 seconds
});
```
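A preview like this maps naturally onto a simple bar visualization. The component below is an illustrative sketch rather than anything shipped by the library; it assumes the preview result exposes its points under `dataPoints` with the `amplitude` field described later in this README, and `WaveformBars` is a made-up name.

```tsx
import React from 'react';
import { View } from 'react-native';

// Hypothetical presentational component: one bar per preview point,
// with heights normalized against the largest amplitude in the set.
export function WaveformBars({ amplitudes }: { amplitudes: number[] }) {
  const max = Math.max(...amplitudes, 1e-6); // guard against an all-silent preview
  return (
    <View style={{ flexDirection: 'row', alignItems: 'flex-end', height: 60 }}>
      {amplitudes.map((a, i) => (
        <View
          key={i}
          style={{
            flex: 1,
            marginHorizontal: 0.5,
            backgroundColor: '#4e9af1',
            height: (a / max) * 60,
          }}
        />
      ))}
    </View>
  );
}

// Usage (assuming the preview result exposes `dataPoints` like the full analysis):
// const amplitudes = preview.dataPoints.map((p) => p.amplitude);
// <WaveformBars amplitudes={amplitudes} />
```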
```typescript
const analysis = await extractWaveform({
  fileUri: 'file:///path/to/audio.mp3',
  segmentDurationMs: 50,
  decodingOptions: {
    targetSampleRate: 44100,
    targetChannels: 1,
    targetBitDepth: 16,
    normalizeAudio: true,
  },
  features: {
    energy: true,
    rms: true,
    zcr: true,
    spectralCentroid: true,
  },
});
```
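When extra features are requested as above, the per-segment results come back on each data point's optional `features` field (see the data point structure below). The snippet is a sketch; the exact key names on `features` are assumed to mirror the requested flags, which this README does not spell out.

```typescript
// Inspect the optional per-segment features (key names assumed to mirror the request).
analysis.dataPoints.forEach((point) => {
  if (point.features) {
    console.log(`segment ${point.id}`, point.features);
  }
});
```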
### `extractWaveform(options)`

Extracts detailed audio analysis from the specified audio file.
#### Parameters
- `fileUri` (string): Path to the audio file
- `segmentDurationMs` (number, optional): Duration of each segment in milliseconds (default: 100)
- `startTimeMs` (number, optional): Start time in milliseconds
- `endTimeMs` (number, optional): End time in milliseconds
- `decodingOptions` (object, optional): Audio decoding configuration
- `features` (object, optional): Additional features to extract
#### Returns
Promise resolving to an `AudioAnalysis` object containing:

- `dataPoints`: Array of waveform data points
- `durationMs`: Total duration in milliseconds
- `sampleRate`: Audio sample rate
- `numberOfChannels`: Number of audio channels
- `amplitudeRange`: Min/max amplitude values
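Taken together, the result can be pictured roughly as the interface below. This is an illustrative sketch assembled from the fields listed above, not the library's published typings; in particular the exact shape of `amplitudeRange` is assumed.

```typescript
// Illustrative shape only; defer to the library's own TypeScript definitions.
interface AudioAnalysis {
  dataPoints: DataPoint[];                      // per-segment data (see the data point structure below)
  durationMs: number;                           // total duration in milliseconds
  sampleRate: number;                           // audio sample rate in Hz
  numberOfChannels: number;                     // number of audio channels
  amplitudeRange: { min: number; max: number }; // assumed min/max amplitude shape
}
```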
### `extractPreview(options)`

Generates a simplified preview of the audio waveform for quick visualization.
#### Parameters
- `fileUri` (string): Path to the audio file
- `numberOfPoints` (number, optional): Number of data points to generate (default: 100)
- `startTimeMs` (number, optional): Start time in milliseconds (default: 0)
- `endTimeMs` (number, optional): End time in milliseconds (default: 30000)
Each data point in the waveform contains:
```typescript
interface DataPoint {
  id: number;
  amplitude: number;        // Peak amplitude for the segment
  rms: number;              // Root mean square value
  dB: number;               // dBFS value
  silent: boolean;          // Whether the segment is silent
  startTime?: number;       // Start time in milliseconds
  endTime?: number;         // End time in milliseconds
  features?: AudioFeatures; // Additional audio features
}
```
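The `silent`, `dB`, and optional timing fields make it straightforward to skip or trim quiet regions. A minimal sketch, reusing an `analysis` result from the examples above and assuming `startTime`/`endTime` were populated:

```typescript
// Collect the time ranges of non-silent segments.
const audibleRanges = analysis.dataPoints
  .filter((p) => !p.silent && p.startTime !== undefined && p.endTime !== undefined)
  .map((p) => ({ startMs: p.startTime!, endMs: p.endTime!, dB: p.dB }));

console.log(`${audibleRanges.length} audible segments out of ${analysis.dataPoints.length}`);
```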
Check out the example app in the `example/` directory for a complete implementation.
See the contributing guide to learn how to contribute to the repository and the development workflow.
MIT