<img
  src="docs/static/img/example.gif"
  align="right"
  width="35%"
  alt="example"
/>
[![Contributors][contributors-shield]][contributors-url]
[![Forks][forks-shield]][forks-url]
[![Stargazers][stars-shield]][stars-url]
[![Issues][issues-shield]][issues-url]
[![MIT License][license-shield]][license-url]
[![NPM Version][npm-version-shield]][npm-version-url]
A React Native Vision Camera plugin that exposes high-performance Google ML Kit frame processor features such as text recognition (OCR), face detection, barcode scanning, pose detection, and more.
> The example app is intentionally heavy and demo-focused. For integration details, follow the documentation below.
- iOS 12+ and Android SDK 21+
- react-native-vision-camera
- react-native-worklets-core
Install Vision Camera (React Native):
```sh
npm i react-native-vision-camera
cd ios && pod install
```
Install Worklets Core:
```sh
npm i react-native-worklets-core
# or
yarn add react-native-worklets-core
```
Add the Babel plugin in babel.config.js:
```js
module.exports = {
  plugins: [['react-native-worklets-core/plugin']],
};
```
> For Expo, follow the Vision Camera guide: react-native-vision-camera.com/docs/guides
```sh
npm install react-native-vision-camera-mlkit
# or
yarn add react-native-vision-camera-mlkit

cd ios && pod install
```
By default, all ML Kit features are enabled. You can selectively include only the models you need to reduce binary size.
In your app's android/build.gradle (root project), add:
```gradle
ext["react-native-vision-camera-mlkit"] = [
  mlkit: [
    textRecognition: true,
    textRecognitionChinese: false,
    textRecognitionDevanagari: false,
    textRecognitionJapanese: false,
    textRecognitionKorean: false,
    faceDetection: false,
    faceMeshDetection: false,
    poseDetection: false,
    poseDetectionAccurate: false,
    selfieSegmentation: false,
    subjectSegmentation: false,
    documentScanner: false,
    barcodeScanning: true,
    imageLabeling: false,
    objectDetection: false,
    digitalInkRecognition: false,
  ]
]
```
In your ios/Podfile, add a configuration hash before target:
```ruby
$VisionCameraMLKit = {
  'textRecognition' => true,
  'textRecognitionChinese' => false,
  'textRecognitionDevanagari' => false,
  'textRecognitionJapanese' => false,
  'textRecognitionKorean' => false,
  'faceDetection' => false,
  'poseDetection' => false,
  'poseDetectionAccurate' => false,
  'selfieSegmentation' => false,
  'barcodeScanning' => true,
  'imageLabeling' => false,
  'objectDetection' => false,
  'digitalInkRecognition' => false,
}
```
Android-only keys: faceMeshDetection, subjectSegmentation, documentScanner.
```ts
import {
  useFrameProcessor,
  runAsync,
  runAtTargetFps,
} from 'react-native-vision-camera';
import { useTextRecognition } from 'react-native-vision-camera-mlkit';

const { textRecognition } = useTextRecognition({
  language: 'LATIN',
  scaleFactor: 1,
  invertColors: false,
});

const frameProcessor = useFrameProcessor(
  (frame) => {
    'worklet';
    runAtTargetFps(10, () => {
      'worklet';
      runAsync(frame, () => {
        'worklet';
        const result = textRecognition(frame, {
          outputOrientation: 'portrait',
        });
        console.log(result.text);
      });
    });
  },
  [textRecognition]
);
```
TextRecognitionOptions:
- language?: 'LATIN' | 'CHINESE' | 'DEVANAGARI' | 'JAPANESE' | 'KOREAN'
- scaleFactor?: number (0.9-1.0)
- invertColors?: boolean
- frameProcessInterval?: number (deprecated, use runAtTargetFps)
TextRecognitionArguments:
- outputOrientation?: 'portrait' | 'portrait-upside-down' | 'landscape-left' | 'landscape-right' (iOS only)
Use processImageTextRecognition to analyze a file path or URI without the camera (for example, images picked from the gallery).
```ts
import { processImageTextRecognition } from 'react-native-vision-camera-mlkit';

const result = await processImageTextRecognition(imageUri, {
  language: 'LATIN',
  orientation: 'portrait',
  invertColors: false,
});
console.log(result.blocks);
```
TextRecognitionImageOptions:
- language?: 'LATIN' | 'CHINESE' | 'DEVANAGARI' | 'JAPANESE' | 'KOREAN'
- orientation?: 'portrait' | 'portrait-upside-down' | 'landscape-left' | 'landscape-right'
- invertColors?: boolean
The native bridge normalizes URIs (file:// is removed on iOS and added on Android if missing). Supported formats: JPEG, PNG, WebP.
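The normalization rules above can be sketched in plain TypeScript. This is an illustrative sketch, not the native implementation; the function names are made up here:

```typescript
// File formats the bridge accepts, per the documentation above.
const SUPPORTED_EXTENSIONS = ['jpg', 'jpeg', 'png', 'webp'];

// iOS-side sketch: native image loading expects a plain filesystem
// path, so a file:// scheme is stripped if present.
function normalizeUriForIOS(uri: string): string {
  return uri.startsWith('file://') ? uri.slice('file://'.length) : uri;
}

// Android-side sketch: URI parsing expects a scheme, so file:// is
// prepended when no scheme is present.
function normalizeUriForAndroid(uri: string): string {
  return uri.includes('://') ? uri : `file://${uri}`;
}

// Quick client-side check before calling the bridge at all.
function isSupportedFormat(uri: string): boolean {
  const ext = uri.split('.').pop()?.toLowerCase() ?? '';
  return SUPPORTED_EXTENSIONS.includes(ext);
}
```

In practice you can pass either form on both platforms and let the bridge normalize; the check above is only useful for failing fast in JavaScript.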
The package also exposes helpers from the plugin factory:
```ts
import {
  getFeatureErrorMessage,
  isFeatureAvailable,
  assertFeatureAvailable,
  getAvailableFeatures,
} from 'react-native-vision-camera-mlkit';
```
- getAvailableFeatures(): MLKitFeature[]
- isFeatureAvailable(feature: MLKitFeature): boolean
- assertFeatureAvailable(feature: MLKitFeature): void
- getFeatureErrorMessage(feature: MLKitFeature): string
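An app might gate UI on availability rather than asserting. A sketch of that pattern follows; the helpers are stubbed locally so the snippet stands alone, and the stub values are invented for illustration. In app code, import isFeatureAvailable and getFeatureErrorMessage from the package instead:

```typescript
// Stubbed stand-ins for the package helpers, so this sketch is
// self-contained. The feature names and messages here are illustrative.
type MLKitFeature = 'textRecognition' | 'barcodeScanning' | 'faceDetection';
const enabledFeatures: MLKitFeature[] = ['textRecognition', 'barcodeScanning'];

const isFeatureAvailable = (feature: MLKitFeature): boolean =>
  enabledFeatures.includes(feature);

const getFeatureErrorMessage = (feature: MLKitFeature): string =>
  `Feature "${feature}" is not enabled. Add it to your Gradle/Podfile configuration.`;

// Gate a screen on availability instead of throwing at startup.
function describeScannerState(feature: MLKitFeature): string {
  return isFeatureAvailable(feature) ? 'ready' : getFeatureErrorMessage(feature);
}
```

This keeps misconfiguration visible to developers while letting the rest of the app run.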
Frame processors throw a setup error when the feature is not enabled in Gradle/Podfile. For static image processing, the following error strings are exported:
- IMAGE_NOT_FOUND_ERROR
- INVALID_URI_ERROR
- IMAGE_PROCESSING_FAILED_ERROR
- UNSUPPORTED_IMAGE_FORMAT_ERROR
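One way to use these constants is to map thrown errors to user-facing hints. In this sketch the constant values are placeholders; in app code, import the real exported strings from the package and keep only the matching logic:

```typescript
// Placeholder values standing in for the package's exported error
// strings — only the matching pattern below is the point of the sketch.
const IMAGE_NOT_FOUND_ERROR = 'IMAGE_NOT_FOUND';
const INVALID_URI_ERROR = 'INVALID_URI';
const IMAGE_PROCESSING_FAILED_ERROR = 'IMAGE_PROCESSING_FAILED';
const UNSUPPORTED_IMAGE_FORMAT_ERROR = 'UNSUPPORTED_IMAGE_FORMAT';

// Translate an error message from processImageTextRecognition into
// something suitable for a toast or alert.
function toUserMessage(message: string): string {
  if (message.includes(IMAGE_NOT_FOUND_ERROR)) return 'That image could not be found.';
  if (message.includes(INVALID_URI_ERROR)) return 'That image location is invalid.';
  if (message.includes(UNSUPPORTED_IMAGE_FORMAT_ERROR)) return 'Please use a JPEG, PNG, or WebP image.';
  if (message.includes(IMAGE_PROCESSING_FAILED_ERROR)) return 'Processing failed. Please try again.';
  return 'Something went wrong.';
}
```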
Use the feature helpers to provide user-friendly configuration hints:
```ts
import {
  assertFeatureAvailable,
  MLKIT_FEATURE_KEYS,
} from 'react-native-vision-camera-mlkit';

assertFeatureAvailable(MLKIT_FEATURE_KEYS.TEXT_RECOGNITION);
```
- Follow the Vision Camera performance guide.
- Prefer runAsync(...) for heavy ML work to keep the frame processor responsive.
- Use runAtTargetFps(...) to throttle processing instead of frameProcessInterval.
iOS camera sensors are fixed in landscape orientation. The frame buffer stays landscape-shaped even when the UI rotates, so ML Kit needs an explicit orientation hint to rotate text correctly. On iOS, pass outputOrientation to textRecognition(frame, { outputOrientation }) so ML Kit can map the buffer to upright text. Android handles rotation automatically.
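If your app already tracks UI rotation in degrees, a mapping to the outputOrientation values might look like the sketch below. This is illustrative only: the function name is invented, and which rotation corresponds to landscape-left versus landscape-right depends on where your rotation value comes from, so verify against your orientation source:

```typescript
type OutputOrientation =
  | 'portrait'
  | 'portrait-upside-down'
  | 'landscape-left'
  | 'landscape-right';

// Hypothetical helper: map a UI rotation in degrees to an
// outputOrientation value. Normalizes negative and >360 inputs.
function toOutputOrientation(uiRotationDegrees: number): OutputOrientation {
  const normalized = ((uiRotationDegrees % 360) + 360) % 360;
  switch (normalized) {
    case 90:
      return 'landscape-left';
    case 180:
      return 'portrait-upside-down';
    case 270:
      return 'landscape-right';
    default:
      return 'portrait';
  }
}
```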
On Apple Silicon Macs, building for the iOS Simulator (arm64) may fail after installing this package.
This is a known limitation of Google ML Kit, which does not currently ship an arm64-simulator slice for some iOS frameworks.
The library works correctly on physical iOS devices and on the iOS Simulator when running under Rosetta.
| # | Feature | Status | Platform |
| --- | --------------------------------- | ------------------------------------------ | ------------------------------------------------- |
| 0 | Text recognition v2 | [![complete][complete]][complete] | [![android][android]][android] [![ios][ios]][ios] |
| 1 | Face detection | [![in-progress][in-progress]][in-progress] | [![android][android]][android] [![ios][ios]][ios] |
| 2 | Face mesh detection | [![in-progress][in-progress]][in-progress] | [![android][android]][android] |
| 3 | Pose detection | [![in-progress][in-progress]][in-progress] | [![android][android]][android] [![ios][ios]][ios] |
| 4 | Selfie segmentation | [![in-progress][in-progress]][in-progress] | [![android][android]][android] [![ios][ios]][ios] |
| 5 | Subject segmentation | [![in-progress][in-progress]][in-progress] | [![android][android]][android] |
| 6 | Document scanner | [![in-progress][in-progress]][in-progress] | [![android][android]][android] |
| 7 | Barcode scanning | [![in-progress][in-progress]][in-progress] | [![android][android]][android] [![ios][ios]][ios] |
| 8 | Image labeling | [![in-progress][in-progress]][in-progress] | [![android][android]][android] [![ios][ios]][ios] |
| 9 | Object detection and tracking | [![in-progress][in-progress]][in-progress] | [![android][android]][android] [![ios][ios]][ios] |
| 10 | Digital ink recognition | [![in-progress][in-progress]][in-progress] | [![android][android]][android] [![ios][ios]][ios] |
If this project helps you, please consider sponsoring its development.
react-native-vision-camera-mlkit is provided as-is and maintained in my free time.
If you're integrating this library into a production app, please consider funding the project.
[complete]: https://img.shields.io/badge/COMPLETE-5E5CE6
[in-progress]: https://img.shields.io/badge/IN%20PROGRESS-FFD60A
[android]: https://img.shields.io/badge/ANDROID-3DDC84
[ios]: https://img.shields.io/badge/IOS-0A84FF
[contributors-shield]: https://img.shields.io/github/contributors/pedrol2b/react-native-vision-camera-mlkit.svg?style=for-the-badge
[contributors-url]: https://github.com/pedrol2b/react-native-vision-camera-mlkit/graphs/contributors
[forks-shield]: https://img.shields.io/github/forks/pedrol2b/react-native-vision-camera-mlkit.svg?style=for-the-badge
[forks-url]: https://github.com/pedrol2b/react-native-vision-camera-mlkit/network/members
[stars-shield]: https://img.shields.io/github/stars/pedrol2b/react-native-vision-camera-mlkit.svg?style=for-the-badge
[stars-url]: https://github.com/pedrol2b/react-native-vision-camera-mlkit/stargazers
[issues-shield]: https://img.shields.io/github/issues/pedrol2b/react-native-vision-camera-mlkit.svg?style=for-the-badge
[issues-url]: https://github.com/pedrol2b/react-native-vision-camera-mlkit/issues
[license-shield]: https://img.shields.io/github/license/pedrol2b/react-native-vision-camera-mlkit.svg?style=for-the-badge
[license-url]: https://github.com/pedrol2b/react-native-vision-camera-mlkit/blob/main/LICENSE
[npm-version-shield]: https://img.shields.io/npm/v/react-native-vision-camera-mlkit.svg?style=for-the-badge
[npm-version-url]: https://www.npmjs.com/package/react-native-vision-camera-mlkit