Capacitor plugin for comprehensive on-device speech recognition with live partial results.
Natural, low-latency speech recognition for Capacitor apps with parity across iOS and Android, streaming partial results, and permission helpers baked in.
This package starts from the excellent capacitor-community/speech-recognition plugin, but folds in the most requested pull requests from that repo (punctuation support, segmented sessions, crash fixes) and keeps them maintained under the Capgo umbrella. You get the familiar API plus:
- ✅ Merged community PRs – punctuation toggles on iOS (PR #74), segmented results & silence handling on Android (PR #104), and the recognitionRequest safety fix (PR #105) ship out-of-the-box.
- 🚀 New Capgo features – configurable silence windows, streaming segment listeners, consistent permission helpers, and a refreshed example app.
- 🛠️ Active maintenance – same conventions as all Capgo plugins (SPM, Podspec, workflows, example app) so it tracks Capacitor major versions without bit-rot.
- 📦 Drop-in migration – TypeScript definitions remain compatible with the community plugin while exposing the extra options (addPunctuation, allowForSilence, segmentResults, etc.).
The most complete doc is available here: https://capgo.app/docs/plugins/speech-recognition/
| Plugin version | Capacitor compatibility | Maintained |
| -------------- | ----------------------- | ---------- |
| v8.x.x         | v8.x.x                  | ✅         |
| v7.x.x         | v7.x.x                  | On demand  |
| v6.x.x         | v6.x.x                  | ❌         |
| v5.x.x         | v5.x.x                  | ❌         |
> Note: The major version of this plugin follows the major version of Capacitor. Use the version that matches your Capacitor installation (e.g., plugin v8 for Capacitor 8). Only the latest major version is actively maintained.
```bash
npm install @capgo/capacitor-speech-recognition
npx cap sync
```

```typescript
import { SpeechRecognition } from '@capgo/capacitor-speech-recognition';
await SpeechRecognition.requestPermissions();
const { available } = await SpeechRecognition.available();
if (!available) {
  console.warn('Speech recognition is not supported on this device.');
}

const partialListener = await SpeechRecognition.addListener('partialResults', (event) => {
  console.log('Partial:', event.matches?.[0]);
});

await SpeechRecognition.start({
  language: 'en-US',
  maxResults: 3,
  partialResults: true,
});
// Later, when you want to stop listening
await SpeechRecognition.stop();
await partialListener.remove();
```
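If you need the Android-only segmented mode, a minimal sketch could look like the following. The listener names and the allowForSilence option come from the API reference below; the 1500 ms silence window is an illustrative value, not a requirement:

```typescript
import { SpeechRecognition } from '@capgo/capacitor-speech-recognition';

// Receive one result per speech segment (Android only).
const segmentListener = await SpeechRecognition.addListener('segmentResults', (event) => {
  console.log('Segment:', event.matches?.[0]);
});

// Fired when the segmented session ends (Android only).
const endListener = await SpeechRecognition.addListener('endOfSegmentedSession', () => {
  console.log('Segmented session finished');
});

await SpeechRecognition.start({
  language: 'en-US',
  allowForSilence: 1500, // milliseconds of silence before the session is split into segments
});

// Tear down when you are done.
await SpeechRecognition.stop();
await segmentListener.remove();
await endListener.remove();
```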
Add the following keys to your app's Info.plist:
- NSSpeechRecognitionUsageDescription
- NSMicrophoneUsageDescription
* available()
* start(...)
* stop()
* getSupportedLanguages()
* isListening()
* checkPermissions()
* requestPermissions()
* getPluginVersion()
* addListener('endOfSegmentedSession', ...)
* addListener('segmentResults', ...)
* addListener('partialResults', ...)
* addListener('listeningState', ...)
* removeAllListeners()
* Interfaces
* Type Aliases
```typescript
available() => Promise<SpeechRecognitionAvailability>
```
Checks whether the native speech recognition service is usable on the current device.
Returns: Promise<SpeechRecognitionAvailability>
--------------------
```typescript
start(options?: SpeechRecognitionStartOptions | undefined) => Promise<SpeechRecognitionMatches>
```
Begins capturing audio and transcribing speech.
When partialResults is true, the returned promise resolves immediately and updates are
streamed through the partialResults listener until stop() is called.
| Param | Type |
| ------------- | --------------------------------------------------------------------------------------- |
| options | SpeechRecognitionStartOptions |
Returns: Promise<SpeechRecognitionMatches>
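For example, when partialResults is not enabled you can simply await the final matches (a minimal sketch; the exact moment the promise settles depends on the native recognizer):

```typescript
const { matches } = await SpeechRecognition.start({
  language: 'en-US',
  maxResults: 3,
});
console.log('Best match:', matches?.[0]);
```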
--------------------
```typescript
stop() => Promise<void>
```
Stops listening and tears down native resources.
--------------------
```typescript
getSupportedLanguages() => Promise<SpeechRecognitionLanguages>
```
Gets the locales supported by the underlying recognizer.
Android 13+ devices no longer expose this list; in that case languages is empty.
Returns: Promise<SpeechRecognitionLanguages>
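A small sketch that falls back to a default locale when the list is empty (the en-US fallback is just an example):

```typescript
const { languages } = await SpeechRecognition.getSupportedLanguages();
if (languages.length === 0) {
  // Android 13+ no longer exposes the list, so pick a sensible default.
  console.log('Language list unavailable, falling back to en-US');
} else {
  console.log('Supported locales:', languages);
}
```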
--------------------
```typescript
isListening() => Promise<SpeechRecognitionListening>
```
Returns whether the plugin is actively listening for speech.
Returns: Promise<SpeechRecognitionListening>
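For instance, you can guard against starting a second session (sketch only):

```typescript
const { listening } = await SpeechRecognition.isListening();
if (!listening) {
  await SpeechRecognition.start({ partialResults: true });
}
```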
--------------------
```typescript
checkPermissions() => Promise<SpeechRecognitionPermissionStatus>
```
Gets the current permission state.
Returns: Promise<SpeechRecognitionPermissionStatus>
--------------------
```typescript
requestPermissions() => Promise<SpeechRecognitionPermissionStatus>
```
Requests the microphone + speech recognition permissions.
Returns: Promise<SpeechRecognitionPermissionStatus>
--------------------
```typescript
getPluginVersion() => Promise<{ version: string; }>
```
Returns the native plugin version bundled with this package.
Useful when reporting issues to confirm that native and JS versions match.
Returns: Promise<{ version: string; }>
--------------------
```typescript
addListener(eventName: 'endOfSegmentedSession', listenerFunc: () => void) => Promise<PluginListenerHandle>
```
Listen for segmented session completion events (Android only).
| Param | Type |
| ------------------ | ------------------------------------ |
| eventName | 'endOfSegmentedSession' |
| listenerFunc | () => void |
Returns: Promise<PluginListenerHandle>
--------------------
```typescript
addListener(eventName: 'segmentResults', listenerFunc: (event: SpeechRecognitionSegmentResultEvent) => void) => Promise<PluginListenerHandle>
```
Listen for segmented recognition results (Android only).
| Param | Type |
| ------------------ | ----------------------------------------------------------------------------------------------------------------------- |
| eventName | 'segmentResults' |
| listenerFunc | (event: SpeechRecognitionSegmentResultEvent) => void |
Returns: Promise<PluginListenerHandle>
--------------------
```typescript
addListener(eventName: 'partialResults', listenerFunc: (event: SpeechRecognitionPartialResultEvent) => void) => Promise<PluginListenerHandle>
```
Listen for partial transcription updates emitted while partialResults is enabled.
| Param | Type |
| ------------------ | ----------------------------------------------------------------------------------------------------------------------- |
| eventName | 'partialResults' |
| listenerFunc | (event: SpeechRecognitionPartialResultEvent) => void |
Returns: Promise<PluginListenerHandle>
--------------------
```typescript
addListener(eventName: 'listeningState', listenerFunc: (event: SpeechRecognitionListeningEvent) => void) => Promise<PluginListenerHandle>
```
Listen for changes to the native listening state.
| Param | Type |
| ------------------ | --------------------------------------------------------------------------------------------------------------- |
| eventName | 'listeningState' |
| listenerFunc | (event: SpeechRecognitionListeningEvent) => void |
Returns: Promise<PluginListenerHandle>
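A sketch of mirroring the native state in your UI (the isRecording flag is illustrative):

```typescript
let isRecording = false;

const stateListener = await SpeechRecognition.addListener('listeningState', (event) => {
  isRecording = event.status === 'started';
  console.log('Recording:', isRecording);
});

// Later, when the view is destroyed
await stateListener.remove();
```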
--------------------
```typescript
removeAllListeners() => Promise<void>
```
Removes every registered listener.
--------------------
#### SpeechRecognitionAvailability
| Prop | Type |
| --------------- | -------------------- |
| available | boolean |
#### SpeechRecognitionMatches
| Prop | Type |
| ------------- | --------------------- |
| matches | string[] |
#### SpeechRecognitionStartOptions
Configure how the recognizer behaves when calling start().
| Prop | Type | Description |
| --------------------- | -------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| language             | string               | Locale identifier such as en-US. When omitted, the device language is used.                                                                                                 |
| maxResults           | number               | Maximum number of final matches returned by native APIs. Defaults to 5.                                                                                                     |
| prompt               | string               | Prompt message shown inside the Android system dialog (ignored on iOS).                                                                                                     |
| popup                | boolean              | When true, Android shows the OS speech dialog instead of running inline recognition. Defaults to false.                                                                     |
| partialResults       | boolean              | Emits partial transcription updates through the partialResults listener while audio is captured.                                                                            |
| addPunctuation       | boolean              | Enables native punctuation handling where supported (iOS 16+).                                                                                                              |
| allowForSilence      | number               | Allow a number of milliseconds of silence before splitting the recognition session into segments. Required to be greater than zero and currently supported on Android only. |
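A hedged sketch combining several of these options; the prompt text and numbers are illustrative, and the platform-specific options (prompt, popup, addPunctuation) only take effect on their respective platforms:

```typescript
await SpeechRecognition.start({
  language: 'en-US',
  maxResults: 5,
  prompt: 'Say something', // Android dialog text (ignored on iOS)
  popup: true,             // use the Android OS speech dialog instead of inline recognition
  addPunctuation: true,    // punctuation where supported (iOS 16+)
});
```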
#### SpeechRecognitionLanguages
| Prop | Type |
| --------------- | --------------------- |
| languages | string[] |
#### SpeechRecognitionListening
| Prop | Type |
| --------------- | -------------------- |
| listening | boolean |
#### SpeechRecognitionPermissionStatus
Permission map returned by checkPermissions and requestPermissions.
On Android the state maps to the RECORD_AUDIO permission.
On iOS it combines speech recognition plus microphone permission.
| Prop | Type |
| ----------------------- | ----------------------------------------------------------- |
| speechRecognition | PermissionState |
#### PluginListenerHandle
| Prop | Type |
| ------------ | ----------------------------------------- |
| remove | () => Promise<void> |
#### SpeechRecognitionSegmentResultEvent
Raised whenever a segmented result is produced (Android only).
| Prop | Type |
| ------------- | --------------------- |
| matches | string[] |
#### SpeechRecognitionPartialResultEvent
Raised whenever a partial transcription is produced.
| Prop | Type |
| ------------- | --------------------- |
| matches | string[] |
#### SpeechRecognitionListeningEvent
Raised when the listening state changes.
| Prop | Type |
| ------------ | ----------------------------------- |
| status | 'started' \| 'stopped' |
#### PermissionState
'prompt' | 'prompt-with-rationale' | 'granted' | 'denied'
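As an example of how these states can be handled, here is a small, hypothetical ensurePermission helper built on checkPermissions and requestPermissions:

```typescript
import { SpeechRecognition } from '@capgo/capacitor-speech-recognition';

// Hypothetical helper: resolves true once speech recognition may be used.
async function ensurePermission(): Promise<boolean> {
  let { speechRecognition } = await SpeechRecognition.checkPermissions();
  if (speechRecognition === 'prompt' || speechRecognition === 'prompt-with-rationale') {
    ({ speechRecognition } = await SpeechRecognition.requestPermissions());
  }
  return speechRecognition === 'granted';
}
```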