```bash
npm install react-native-vision-camera-ocr-plus
```

A React Native Vision Camera frame processor for on-device text recognition (OCR) and translation using ML Kit.
Actively maintained fork of react-native-vision-camera-text-recognition, with modern improvements, bug fixes, and support for the latest Vision Camera and React Native versions.
---
The original packages are no longer actively maintained.
This fork provides:
- Ongoing maintenance and compatibility with React Native 0.76+ and Vision Camera v4+
- Translation support (not just OCR) powered by ML Kit
- Improved stability and error handling
- Faster processing and frame optimization
- TypeScript definitions included
- A consistent API that works seamlessly with modern React Native projects
---
- Simple drop-in API
- Fast, accurate on-device OCR
- Works on Android and iOS
- Built-in translation via ML Kit
- Recognize text from the live camera or static photos
- Written in Kotlin and Swift
- Compatible with react-native-vision-camera and react-native-worklets-core
- Compatible with Firebase
---
> **Peer dependencies:**
> You must have react-native-vision-camera and react-native-worklets-core installed.

```bash
npm install react-native-vision-camera-ocr-plus
# or
yarn add react-native-vision-camera-ocr-plus
```
On Apple Silicon Macs, building for the iOS Simulator (arm64) may fail after installing this package.
This is a known limitation of Google ML Kit, which does not currently ship an arm64-simulator slice for some iOS frameworks.
The library works correctly on physical iOS devices and on the iOS Simulator when running under Rosetta.
Full context and discussion
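If you hit this build failure, a commonly used workaround (an assumption based on general ML Kit / CocoaPods practice, not an official fix documented by this package) is to exclude the arm64 architecture for simulator builds in your Podfile, so the simulator target builds as x86_64 and runs under Rosetta:

```ruby
# ios/Podfile — hypothetical workaround; adjust to your project setup
post_install do |installer|
  installer.pods_project.targets.each do |target|
    target.build_configurations.each do |config|
      # Some ML Kit frameworks ship no arm64 simulator slice,
      # so force simulator builds to x86_64 (runs under Rosetta).
      config.build_settings['EXCLUDED_ARCHS[sdk=iphonesimulator*]'] = 'arm64'
    end
  end
end
```

Run `pod install` again after editing the Podfile. Physical devices are unaffected by this setting.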
---
| Previous Package | Replacement | Notes |
|------------------|-------------|-------|
| react-native-vision-camera-text-recognition | react-native-vision-camera-ocr-plus | Drop-in replacement with fixes and updates |
| vision-camera-ocr | react-native-vision-camera-ocr-plus | Actively maintained alternative |
---
See the example app for a working demo.
```jsx
import React, { useState } from 'react';
import { StyleSheet } from 'react-native';
import { useCameraDevice } from 'react-native-vision-camera';
import { Camera } from 'react-native-vision-camera-ocr-plus';

export default function App() {
  const [data, setData] = useState(null);
  const device = useCameraDevice('back');

  return (
    <>
      {!!device && (
        <Camera
          style={StyleSheet.absoluteFill}
          device={device}
          isActive
          mode="recognize"
          options={{ language: 'latin' }}
          callback={(result) => setData(result)}
        />
      )}
    </>
  );
}
```
---
```jsx
import React, { useState } from 'react';
import { StyleSheet } from 'react-native';
import { useCameraDevice } from 'react-native-vision-camera';
import { Camera } from 'react-native-vision-camera-ocr-plus';

export default function App() {
  const [data, setData] = useState(null);
  const device = useCameraDevice('back');

  return (
    <>
      {!!device && (
        <Camera
          style={StyleSheet.absoluteFill}
          device={device}
          isActive
          mode="translate"
          options={{ from: 'en', to: 'de' }}
          callback={(result) => setData(result)}
        />
      )}
    </>
  );
}
```
---
```jsx
import React from 'react';
import { StyleSheet } from 'react-native';
import { Camera, useCameraDevice, useFrameProcessor } from 'react-native-vision-camera';
import { useTextRecognition } from 'react-native-vision-camera-ocr-plus';

export default function App() {
  const device = useCameraDevice('back');
  const { scanText } = useTextRecognition({ language: 'latin' });

  const frameProcessor = useFrameProcessor((frame) => {
    'worklet';
    const data = scanText(frame);
    console.log('Detected text:', data);
  }, []);

  return (
    <>
      {!!device && (
        <Camera
          style={StyleSheet.absoluteFill}
          device={device}
          isActive
          frameProcessor={frameProcessor}
        />
      )}
    </>
  );
}
```
---
| Option | Type | Values | Default | Description |
|:-------|:-----|:--------|:---------|:------------|
| language | string | latin, chinese, devanagari, japanese, korean | latin | Text recognition language |
| mode | string | recognize, translate | recognize | Processing mode |
| from, to | string | See Supported Languages | en, de | Translation languages |
| scanRegion | object | { left, top, width, height } | undefined | Region of the frame to scan (values are string percentages, 0-100) |
| frameSkipThreshold | number | Any positive integer | 10 | Process only every Nth frame for better performance (higher = faster) |
| useLightweightMode | boolean | true, false | false | (Android only) Use lightweight processing for better performance |
---
You can restrict scanning to a specific region of the camera frame. This is useful for improving performance, focusing on a particular area, or reducing false positives from background text.
**Important:** all scanRegion values are percentage proportions from 0 to 100, passed as strings (e.g. '25%').
```jsx
import React from 'react';
import { StyleSheet } from 'react-native';
import { Camera, useCameraDevice, useFrameProcessor } from 'react-native-vision-camera';
import { useTextRecognition } from 'react-native-vision-camera-ocr-plus';

export default function App() {
  const device = useCameraDevice('back');
  const { scanText } = useTextRecognition({
    language: 'latin',
    scanRegion: {
      left: '5%',    // Start 5% from the left edge
      top: '25%',    // Start 25% from the top edge
      width: '80%',  // Span 80% of frame width
      height: '40%', // Span 40% of frame height
    },
  });

  const frameProcessor = useFrameProcessor((frame) => {
    'worklet';
    const data = scanText(frame);
    console.log('Detected text in region:', data);
  }, []);

  return (
    <>
      {!!device && (
        <Camera
          style={StyleSheet.absoluteFill}
          device={device}
          isActive
          frameProcessor={frameProcessor}
        />
      )}
    </>
  );
}
```
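For intuition, the percentage strings resolve against the frame dimensions roughly like this (a hypothetical helper for illustration only, not part of this library's API):

```javascript
// Hypothetical helper: convert percentage-string scanRegion values
// into pixel coordinates for a given frame size.
function regionToPixels(scanRegion, frameWidth, frameHeight) {
  const pct = (value) => parseFloat(value) / 100; // '25%' -> 0.25
  return {
    left: Math.round(pct(scanRegion.left) * frameWidth),
    top: Math.round(pct(scanRegion.top) * frameHeight),
    width: Math.round(pct(scanRegion.width) * frameWidth),
    height: Math.round(pct(scanRegion.height) * frameHeight),
  };
}

// Example: the region above on a 1080x1920 frame
const px = regionToPixels(
  { left: '5%', top: '25%', width: '80%', height: '40%' },
  1080,
  1920
);
console.log(px); // { left: 54, top: 480, width: 864, height: 768 }
```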
For better performance on Android devices, especially mid-range phones, you can adjust these options:
```jsx
// Higher performance (recommended for real-time scanning)
const { scanText } = useTextRecognition({
  language: 'latin',
  frameSkipThreshold: 10,  // Process every 10th frame
  useLightweightMode: true // Skip detailed corner points and element processing
});

// Balanced performance/accuracy
const { scanText } = useTextRecognition({
  language: 'latin',
  frameSkipThreshold: 3, // Process every 3rd frame
  useLightweightMode: true
});

// Maximum accuracy (slower)
const { scanText } = useTextRecognition({
  language: 'latin',
  frameSkipThreshold: 1,    // Process every frame
  useLightweightMode: false // Full detailed data
});
```
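Conceptually, `frameSkipThreshold` gates the processor so only every Nth frame is analyzed. A minimal sketch of that counting logic (illustrative only, not the library's actual implementation):

```javascript
// Illustrative frame-skip gate: returns true only every Nth call.
function makeFrameGate(frameSkipThreshold) {
  let counter = 0;
  return () => {
    counter += 1;
    if (counter >= frameSkipThreshold) {
      counter = 0;
      return true; // process this frame
    }
    return false; // skip this frame
  };
}

const shouldProcess = makeFrameGate(3);
const results = [1, 2, 3, 4, 5, 6].map(() => shouldProcess());
console.log(results); // [false, false, true, false, false, true]
```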
You can also improve performance by using runAtTargetFps in your frame processor:

```jsx
import { runAtTargetFps, useFrameProcessor } from 'react-native-vision-camera';

const frameProcessor = useFrameProcessor(
  (frame) => {
    'worklet';
    runAtTargetFps(2, () => {
      const data = scanText(frame);
    });
  },
  [scanText],
);
```
Performance is also typically better in production builds than in development.
- Higher frameSkipThreshold = better performance, less CPU usage
- useLightweightMode = faster processing, reduced memory usage

---
```js
import { PhotoRecognizer } from 'react-native-vision-camera-ocr-plus';

const result = await PhotoRecognizer({
  uri: asset.uri,
  orientation: 'portrait',
});

console.log(result);
```
> **Note (iOS only):**
> The orientation option is available only on iOS and is recommended when using photos captured via the camera.
| Property | Type | Values | Required | Default | Platform |
|:----------|:------|:--------|:----------|:----------|:-----------|
| uri | string | — | Yes | — | Android, iOS |
| orientation | string | portrait, portraitUpsideDown, landscapeLeft, landscapeRight | No | portrait | iOS only |
---
```js
import { RemoveLanguageModel } from 'react-native-vision-camera-ocr-plus';

await RemoveLanguageModel('en');
```
---
| Language | Code |
|:----------|:------|
| Afrikaans | af |
| Arabic | ar |
| Bengali | bn |
| Chinese | zh |
| English | en |
| French | fr |
| German | de |
| Hindi | hi |
| Japanese | ja |
| Korean | ko |
| Portuguese | pt |
| Russian | ru |
| Spanish | es |

...and many more.
---
Contributions, feature requests, and bug reports are always welcome!
Please open an issue or pull request.
---
If this library helps you build awesome apps, consider supporting future maintenance and development.
Your support helps keep the package updated and open source.
---
MIT © Jamena McInteer