# expo-face-detection

Expo native module for face detection, liveness detection, and face recognition on Android. Uses MTCNN for face detection and MobileFaceNet for face embeddings.

## Features
- **Face Detection** - Detect multiple faces with bounding boxes and landmarks using MTCNN
- **Liveness Detection** - Anti-spoofing to detect fake/printed faces
- **Face Registration** - Extract 192-dimensional face embeddings for storage
- **Face Matching** - Compare faces against registered embeddings
- **Native Camera View** - Real-time face processing without JS bridge overhead
## Requirements

- Expo SDK 54+
- Android only (iOS not supported)
- Managed workflow with custom dev client
## Installation

```bash
npm install expo-face-detection
```

Add the plugin to your app.json:

```json
{
  "expo": {
    "plugins": ["expo-face-detection"]
  }
}
```

Build your custom dev client:

```bash
npx expo prebuild
npx expo run:android
```

## API

#### detectFaces(imageBase64, cropFaces?)
Detect all faces in an image.
```typescript
import * as FaceDetection from 'expo-face-detection';

const result = await FaceDetection.detectFaces(imageBase64, false);

// result: {
//   faces: [{ box, landmarks, confidence }],
//   faceCount: number,
//   hasFaces: boolean,
//   processingTimeMs: number,
//   frameWidth: number,
//   frameHeight: number
// }
```
#### detectLargestFace(imageBase64)
Detect only the largest face in an image.
```typescript
const face = await FaceDetection.detectLargestFace(imageBase64);
// face: { box, landmarks, confidence } | null
```
#### checkLiveness(imageBase64)
Check if the detected face is from a live person (anti-spoofing).
```typescript
const result = await FaceDetection.checkLiveness(imageBase64);

// result: {
//   faceDetected: boolean,
//   isLive: boolean,
//   livenessScore: number,   // Lower is more likely live
//   sharpness: number,       // Image sharpness score
//   isSharp: boolean,
//   faceBox: { left, top, right, bottom } | null,
//   confidence: number,
//   processingTimeMs: number,
//   errorMessage: string | null
// }
```
**Important:** This module does NOT store embeddings. Your app is responsible for storing embeddings (e.g., on your server, in a database, etc.).
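For example, here is a minimal sketch of on-device persistence with AsyncStorage; the `@react-native-async-storage/async-storage` dependency, key naming, and helper names are illustrative choices, not part of this module:

```typescript
import AsyncStorage from '@react-native-async-storage/async-storage';
import * as FaceDetection from 'expo-face-detection';

// Hypothetical helper: persist a 192-d embedding as JSON.
async function saveEmbedding(userId: string, embedding: number[]) {
  await AsyncStorage.setItem(`face-embedding:${userId}`, JSON.stringify(embedding));
}

// Hypothetical helper: restore a stored embedding and set it as the match target.
async function loadEmbeddingAsTarget(userId: string): Promise<boolean> {
  const raw = await AsyncStorage.getItem(`face-embedding:${userId}`);
  if (!raw) return false;
  FaceDetection.setTargetEmbedding(JSON.parse(raw) as number[]);
  return true;
}
```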
#### extractEmbedding(imageBase64)
Extract a 192-dimensional face embedding from a single image.
```typescript
const result = await FaceDetection.extractEmbedding(imageBase64);

if (result.success) {
  // Store embedding on your server
  await api.saveUserEmbedding(userId, result.embedding);
}

// result: {
//   success: boolean,
//   embedding: number[] | null,  // 192-dimensional array
//   faceBox: { left, top, right, bottom } | null,
//   processingTimeMs: number,
//   errorMessage: string | null
// }
```
#### registerFace(frontBase64, leftBase64, rightBase64)
Register a face using 3 photos for better accuracy. Returns an averaged embedding.
```typescript
const result = await FaceDetection.registerFace(
  frontPhotoBase64,
  leftPhotoBase64,
  rightPhotoBase64
);

if (result.success) {
  // Store the averaged embedding
  await api.registerUser(userId, result.embedding);
}
```
#### setTargetEmbedding(embedding)
Set the target embedding for face matching.
```typescript
// Fetch embedding from your server
const userEmbedding = await api.getUserEmbedding(userId);
FaceDetection.setTargetEmbedding(userEmbedding);
```
#### hasTarget()
Check if a target embedding is set.
```typescript
const hasTarget = FaceDetection.hasTarget(); // boolean
```
#### clearTarget()
Clear the current target embedding.
```typescript
FaceDetection.clearTarget();
```
#### processFrame(imageBase64)
Match a face against the target embedding.
```typescript
const result = await FaceDetection.processFrame(imageBase64);

// result: {
//   faceDetected: boolean,
//   isMatch: boolean,
//   confidence: number,   // 0-1, higher is better
//   distance: number,     // L2 distance, lower is better
//   faceBox: { left, top, right, bottom } | null,
//   processingTimeMs: number,
//   errorMessage: string | null
// }
```
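As a sketch of an image-based verification step, you could pair processFrame with a photo from expo-camera (expo-camera is not a dependency of this module, and the capture options shown are illustrative):

```typescript
import { CameraView } from 'expo-camera';
import * as FaceDetection from 'expo-face-detection';

// Take one photo and match it against the current target embedding.
async function verifyOnce(camera: CameraView | null): Promise<boolean> {
  if (!camera) return false;
  const photo = await camera.takePictureAsync({ base64: true, quality: 0.7 });
  if (!photo?.base64) return false;
  const result = await FaceDetection.processFrame(photo.base64);
  return result.faceDetected && result.isMatch;
}
```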
### Configuration

```typescript
// Face detection
FaceDetection.setMinFaceRatio(0.2);                  // 0.05-0.5, default: 0.2
FaceDetection.setDetectionConfidenceThreshold(0.6);  // 0-1, default: 0.6

// Liveness detection
FaceDetection.setLivenessThreshold(0.2);  // default: 0.2
FaceDetection.setSharpnessThreshold(50);  // default: 50

// Face matching
FaceDetection.setMatchThreshold(1.1);     // L2 distance, default: 1.1
```
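As a usage sketch, you might group these setters into profiles. The specific values below are illustrative starting points to tune for your app, not recommendations from the library:

```typescript
// Stricter: fewer false accepts, more false rejects.
function applyStrictProfile() {
  FaceDetection.setDetectionConfidenceThreshold(0.8);
  FaceDetection.setMatchThreshold(0.9);    // lower L2 threshold = stricter match
  FaceDetection.setLivenessThreshold(0.15);
}

// More forgiving: better UX in poor lighting, weaker guarantees.
function applyLenientProfile() {
  FaceDetection.setDetectionConfidenceThreshold(0.5);
  FaceDetection.setMatchThreshold(1.2);
  FaceDetection.setLivenessThreshold(0.3);
}
```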
## Native Camera View

For real-time face processing, use the native camera view. Frames are processed entirely in native code without crossing the JS bridge.

The camera view supports two modes:

- **matching** (default) - Live face verification against a target embedding
- **enrollment** - Capture 3 photos (front, left, right) to create a face embedding
#### Matching Mode
```tsx
import { FaceDetectionCameraView } from 'expo-face-detection';

<FaceDetectionCameraView
  mode="matching"
  enableMatching={true}
  enableLiveness={false}
  targetEmbedding={userEmbedding}
  matchThreshold={1.1}
  cameraFacing="front"
  onMatchResult={({ nativeEvent }) => {
    if (nativeEvent.isMatch) {
      console.log(`Match! Confidence: ${nativeEvent.confidence}`);
    }
  }}
  onFaceDetected={({ nativeEvent }) => {
    console.log('Face detected:', nativeEvent.faceBox);
  }}
  onError={({ nativeEvent }) => {
    console.error('Error:', nativeEvent.error);
  }}
/>
```
#### Enrollment Mode
Native camera enrollment uses the same Camera2 pipeline for both enrollment and live matching, ensuring consistent embeddings. This is recommended over using expo-image-picker for enrollment.
```tsx
import React, { useState } from 'react';
import { View, Button, Text } from 'react-native';
import { FaceDetectionCameraView } from 'expo-face-detection';

function EnrollmentScreen({ onComplete }) {
  const [capturePhoto, setCapturePhoto] = useState(false);
  const [instruction, setInstruction] = useState('');
  const [photosRemaining, setPhotosRemaining] = useState(3);

  const handleEnrollmentStatus = ({ nativeEvent }) => {
    // Called continuously with the current status
    setInstruction(nativeEvent.instruction);
    setPhotosRemaining(nativeEvent.photosRemaining);
    // nativeEvent: {
    //   currentPhotoIndex: 0,   // 0, 1, 2
    //   photoLabel: "front",    // "front", "left", "right"
    //   instruction: "Look straight at camera",
    //   photosRemaining: 3,     // 3, 2, 1, 0
    //   readyToCapture: true,   // true if face detected
    //   faceDetected: true,
    //   isLive: true,
    //   livenessScore: 0.1,
    //   faceBox: { left, top, right, bottom }
    // }
  };

  const handleEnrollmentCapture = ({ nativeEvent }) => {
    // Called when a photo is captured
    console.log(`Captured ${nativeEvent.photoLabel} (${nativeEvent.photoIndex + 1}/3)`);
    setCapturePhoto(false); // Reset the capture trigger
    // nativeEvent: {
    //   photoIndex: 0,        // 0, 1, 2
    //   photoLabel: "front",  // "front", "left", "right"
    //   totalPhotos: 3,
    //   success: true,
    //   faceDetected: true,
    //   isLive: true,
    //   livenessScore: 0.1,
    //   faceBox: { left, top, right, bottom }
    // }
  };

  const handleEnrollmentComplete = ({ nativeEvent }) => {
    // Called after all 3 photos are captured
    if (nativeEvent.success) {
      // Save the embedding to your server
      onComplete(nativeEvent.embedding);
    } else {
      console.error('Enrollment failed:', nativeEvent.errorMessage);
    }
    // nativeEvent: {
    //   success: true,
    //   embedding: number[],  // 192-dimensional averaged embedding
    //   photoCount: 3,
    //   processingTimeMs: 250,
    //   errorMessage: null
    // }
  };

  return (
    <View style={{ flex: 1 }}>
      <FaceDetectionCameraView
        style={{ flex: 1 }}
        mode="enrollment"
        capturePhoto={capturePhoto}
        cameraFacing="front"
        onEnrollmentStatus={handleEnrollmentStatus}
        onEnrollmentCapture={handleEnrollmentCapture}
        onEnrollmentComplete={handleEnrollmentComplete}
        onError={({ nativeEvent }) => {
          console.error('Error:', nativeEvent.error);
        }}
      />
      <Text>{instruction} ({photosRemaining} photos remaining)</Text>
      <Button
        title="Capture"
        onPress={() => setCapturePhoto(true)}
      />
    </View>
  );
}
```
#### Resetting Enrollment
To restart the enrollment process (e.g., if the user wants to re-capture photos):
```tsx
const [resetEnrollment, setResetEnrollment] = useState(false);

// Trigger a reset
setResetEnrollment(true);
// Remember to set it back to false after triggering
setTimeout(() => setResetEnrollment(false), 100);

<FaceDetectionCameraView
  resetEnrollment={resetEnrollment}
  // ... other props
/>
```
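Since capturePhoto and resetEnrollment are one-shot trigger props, a small helper hook can encapsulate this pulse pattern. The hook below is an illustrative sketch, not part of this module's API:

```tsx
import { useCallback, useState } from 'react';

// Returns a boolean prop value and a function that pulses it true -> false.
function usePulse(durationMs = 100): [boolean, () => void] {
  const [value, setValue] = useState(false);
  const pulse = useCallback(() => {
    setValue(true);
    setTimeout(() => setValue(false), durationMs);
  }, [durationMs]);
  return [value, pulse];
}

// Usage:
// const [resetEnrollment, triggerReset] = usePulse();
// <FaceDetectionCameraView resetEnrollment={resetEnrollment} /* ... */ />
// <Button title="Start Over" onPress={triggerReset} />
```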
#### Props
| Prop | Type | Default | Description |
|------|------|---------|-------------|
| mode | 'matching' \| 'enrollment' | 'matching' | Camera mode |
| enableMatching | boolean | false | Enable face matching (matching mode) |
| enableLiveness | boolean | false | Enable liveness detection |
| targetEmbedding | number[] | - | 192-d embedding for matching |
| matchThreshold | number | 1.1 | L2 distance threshold |
| cameraFacing | 'front' \| 'back' | 'front' | Camera to use |
| capturePhoto | boolean | false | Trigger photo capture (enrollment mode) |
| resetEnrollment | boolean | false | Reset enrollment to start over |
| onMatchResult | function | - | Called with match results (matching mode) |
| onFaceDetected | function | - | Called when face detected |
| onEnrollmentStatus | function | - | Called with enrollment status updates |
| onEnrollmentCapture | function | - | Called when enrollment photo captured |
| onEnrollmentComplete | function | - | Called when all 3 photos captured |
| onError | function | - | Called on errors |
#### Enrollment Events
**onEnrollmentStatus** - Called continuously while in enrollment mode

```typescript
interface EnrollmentStatusEvent {
  currentPhotoIndex: number;  // 0, 1, 2
  photoLabel: string;         // "front", "left", "right"
  instruction: string;        // User instruction text
  photosRemaining: number;    // 3, 2, 1, 0
  readyToCapture: boolean;    // true if conditions met
  faceDetected: boolean;
  isLive?: boolean;
  livenessScore?: number;
  faceBox?: { left: number; top: number; right: number; bottom: number } | null;
}
```
**onEnrollmentCapture** - Called after each photo capture

```typescript
interface EnrollmentCaptureEvent {
  photoIndex: number;   // 0, 1, 2
  photoLabel: string;   // "front", "left", "right"
  totalPhotos: number;  // 3
  success: boolean;
  faceDetected: boolean;
  isLive?: boolean;
  livenessScore?: number;
  faceBox?: { left: number; top: number; right: number; bottom: number } | null;
  errorMessage?: string;
}
```
**onEnrollmentComplete** - Called when all 3 photos are captured

```typescript
interface EnrollmentCompleteEvent {
  success: boolean;
  embedding?: number[];  // 192-d averaged & normalized embedding
  photoCount: number;    // Number of photos used
  processingTimeMs: number;
  errorMessage?: string;
}
```
#### Why Use Native Camera Enrollment?
Using native camera enrollment instead of expo-image-picker provides:
1. **Same camera pipeline** - Both enrollment and matching use the identical Camera2 API, ensuring consistent image processing
2. **Better embedding consistency** - No differences in color correction, compression, or preprocessing between enrollment and verification
3. **Guided capture** - Real-time feedback shows user instructions and face detection status
4. **Liveness during enrollment** - Optional anti-spoofing checks during photo capture
5. **Higher match accuracy** - Embeddings extracted from the same pipeline produce more reliable matches
## Complete Example

```tsx
import React, { useState } from 'react';
import { View, Button, Text, Alert, StyleSheet } from 'react-native';
import { FaceDetectionCameraView } from 'expo-face-detection';

type Screen = 'home' | 'enroll' | 'verify';

export default function App() {
  const [screen, setScreen] = useState<Screen>('home');
  const [savedEmbedding, setSavedEmbedding] = useState<number[] | null>(null);

  // Enrollment state
  const [capturePhoto, setCapturePhoto] = useState(false);
  const [instruction, setInstruction] = useState('');
  const [photosRemaining, setPhotosRemaining] = useState(3);

  // Verification state
  const [isVerifying, setIsVerifying] = useState(false);

  // ===== ENROLLMENT HANDLERS =====
  const handleEnrollmentStatus = ({ nativeEvent }) => {
    setInstruction(nativeEvent.instruction);
    setPhotosRemaining(nativeEvent.photosRemaining);
  };

  const handleEnrollmentCapture = ({ nativeEvent }) => {
    setCapturePhoto(false);
    Alert.alert('Photo Captured', `${nativeEvent.photoLabel} (${nativeEvent.photoIndex + 1}/3)`);
  };

  const handleEnrollmentComplete = ({ nativeEvent }) => {
    if (nativeEvent.success) {
      // In a real app, save this to your server
      setSavedEmbedding(nativeEvent.embedding);
      Alert.alert('Enrollment Complete', 'Face registered successfully!');
      setScreen('home');
    } else {
      Alert.alert('Error', nativeEvent.errorMessage);
    }
  };

  // ===== VERIFICATION HANDLER =====
  const handleMatchResult = ({ nativeEvent }) => {
    if (nativeEvent.isMatch && nativeEvent.confidence > 0.7) {
      setIsVerifying(false);
      Alert.alert('Verified!', `Confidence: ${(nativeEvent.confidence * 100).toFixed(1)}%`);
    }
  };

  // ===== SCREENS =====
  if (screen === 'enroll') {
    return (
      <View style={styles.container}>
        <FaceDetectionCameraView
          style={styles.camera}
          mode="enrollment"
          capturePhoto={capturePhoto}
          cameraFacing="front"
          onEnrollmentStatus={handleEnrollmentStatus}
          onEnrollmentCapture={handleEnrollmentCapture}
          onEnrollmentComplete={handleEnrollmentComplete}
          onError={({ nativeEvent }) => Alert.alert('Error', nativeEvent.error)}
        />
        <View style={styles.controls}>
          <Text style={styles.instruction}>{instruction} ({photosRemaining} left)</Text>
          <Button title="Capture" onPress={() => setCapturePhoto(true)} />
        </View>
      </View>
    );
  }

  if (screen === 'verify') {
    return (
      <View style={styles.container}>
        <FaceDetectionCameraView
          style={styles.camera}
          mode="matching"
          enableMatching={isVerifying}
          targetEmbedding={savedEmbedding!}
          matchThreshold={1.1}
          cameraFacing="front"
          onMatchResult={handleMatchResult}
          onError={({ nativeEvent }) => Alert.alert('Error', nativeEvent.error)}
        />
        <View style={styles.controls}>
          <Button
            title={isVerifying ? 'Stop Verifying' : 'Start Verifying'}
            onPress={() => setIsVerifying(!isVerifying)}
          />
        </View>
      </View>
    );
  }

  // HOME SCREEN
  return (
    <View style={styles.homeContainer}>
      <Text style={styles.title}>Face Detection Demo</Text>
      <Button title="Enroll Face" onPress={() => setScreen('enroll')} />
      <Button
        title="Verify Face"
        onPress={() => setScreen('verify')}
        disabled={!savedEmbedding}
      />
      {!savedEmbedding && <Text>Enroll a face first</Text>}
    </View>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1 },
  camera: { flex: 1 },
  controls: { padding: 20, gap: 10 },
  homeContainer: { flex: 1, justifyContent: 'center', alignItems: 'center', gap: 20 },
  title: { fontSize: 24, fontWeight: 'bold' },
  instruction: { fontSize: 16, fontWeight: '500' },
});
```
## Models

| Model | File | Input Size | Purpose |
|-------|------|------------|---------|
| P-Net | pnet.tflite | 12x12 | First stage face detection |
| R-Net | rnet.tflite | 24x24 | Second stage refinement |
| O-Net | onet.tflite | 48x48 | Final stage + landmarks |
| MobileFaceNet | MobileFaceNet.tflite | 112x112 | 192-d face embedding |
| FaceAntiSpoofing | FaceAntiSpoofing.tflite | 256x256 | Liveness detection |
### Face Detection (MTCNN)

1. **P-Net** (Proposal Network): Generates candidate face regions at multiple scales
2. **R-Net** (Refine Network): Filters candidates and refines bounding boxes
3. **O-Net** (Output Network): Final refinement + 5-point facial landmarks
### Face Recognition

- Model: MobileFaceNet
- Output: 192-dimensional L2-normalized vector
- Comparison: L2 (Euclidean) distance
- Threshold: ~1.1 for same person (lower = stricter)
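For intuition, here is a sketch of the comparison; the module computes this natively, and `l2Distance` and the embedding names below are illustrative:

```typescript
declare const storedEmbedding: number[]; // 192-d, from registration
declare const liveEmbedding: number[];   // 192-d, from the current frame

// L2 (Euclidean) distance between two L2-normalized embeddings.
function l2Distance(a: number[], b: number[]): number {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    const d = a[i] - b[i];
    sum += d * d;
  }
  return Math.sqrt(sum);
}

const distance = l2Distance(storedEmbedding, liveEmbedding);
const isMatch = distance < 1.1; // default threshold; lower = stricter
```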
### Liveness Detection

- Model: FaceAntiSpoofing (tree-based classifier)
- Sharpness: Laplacian variance filter
- Score: Lower values indicate live face
- Threshold: ~0.2 (values below = live)
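A typical gating pattern, sketched with the documented checkLiveness() fields, is to require a live, sharp face before extracting an embedding:

```typescript
const liveness = await FaceDetection.checkLiveness(imageBase64);
if (liveness.faceDetected && liveness.isLive && liveness.isSharp) {
  const { success, embedding } = await FaceDetection.extractEmbedding(imageBase64);
  if (success && embedding) {
    // safe to register or match this embedding
  }
} else {
  // prompt the user: improve lighting, hold steady, remove occlusions
}
```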
## Performance

- Face detection: ~50-100ms per frame
- Embedding extraction: ~30-50ms
- Liveness check: ~40-60ms
- Matching: ~5-10ms
Performance varies based on device, image size, and number of faces.
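Because every result includes processingTimeMs, it is easy to measure these numbers on your own devices; a minimal sketch:

```typescript
const result = await FaceDetection.detectFaces(imageBase64, false);
console.log(`Detected ${result.faceCount} face(s) in ${result.processingTimeMs}ms`);
```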
## Architecture

Recommended flow (native camera enrollment):

```
┌─────────────────────────────────────────────────────────────────┐
│ YOUR EXPO APP │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Registration (Native Camera): │
│ ┌───────────────────────────┐ ┌──────────────────────────┐ │
│ │ FaceDetectionCameraView │ │ onEnrollmentComplete │ │
│ │ mode="enrollment" │───►│ embedding (192-d) │ │
│ │ (3 photos: front/left/ │ │ (averaged & normalized) │ │
│ │ right captured natively)│ └───────────┬──────────────┘ │
│ └───────────────────────────┘ │ │
│ Store on your server │
│ ▼ │
│ ┌─────────────────┐ │
│ │ Your Server │ │
│ │ / Database │ │
│ └────────┬────────┘ │
│ │ │
│ Verification (Native Camera): Fetch embedding │
│ ▼ │
│ ┌───────────────────────────┐ ┌──────────────────────────┐ │
│ │ FaceDetectionCameraView │◄───│ targetEmbedding prop │ │
│ │ mode="matching" │ └──────────────────────────┘ │
│ │ enableMatching={true} │ │
│ └───────────┬───────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────┐ │
│ │ onMatchResult │ │
│ │ isMatch: true │ │
│ │ confidence: 0.9 │ │
│ └─────────────────┘ │
│ │
│ ✓ Same Camera2 pipeline for enrollment & matching │
│ ✓ Consistent image processing = better accuracy │
└────────────────────────────────────────────────────────────────┘
```

Alternative flow (image-based API):

```
┌─────────────────────────────────────────────────────────────────┐
│ YOUR EXPO APP │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Registration (Using expo-image-picker or similar): │
│ ┌──────────┐ ┌─────────────────┐ ┌──────────────────┐ │
│ │ 3 Photos │───►│ registerFace() │───►│ embedding (192-d)│ │
│ └──────────┘ └─────────────────┘ └────────┬─────────┘ │
│ │ │
│ Store on your server │
│ ▼ │
│ ┌─────────────────┐ │
│ │ Your Server │ │
│ │ / Database │ │
│ └────────┬────────┘ │
│ │ │
│ Verification: Fetch embedding │
│ ▼ │
│ ┌──────────┐ ┌───────────────────┐ ┌──────────────────┐ │
│ │ Camera │───►│ setTargetEmbedding│◄─┤ embedding (192-d)│ │
│ └──────────┘ └─────────┬─────────┘ └──────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────┐ │
│ │ processFrame() │ │
│ │ or CameraView │ │
│ └────────┬────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────┐ │
│ │ isMatch: true │ │
│ │ confidence: 0.9 │ │
│ └─────────────────┘ │
│ │
│ ⚠ Different camera pipelines may affect match accuracy │
└─────────────────────────────────────────────────────────────────┘
```

## Permissions

The config plugin automatically adds the camera permission to AndroidManifest.xml:

```xml
<uses-permission android:name="android.permission.CAMERA" />
```

You still need to request runtime permission in your app:
```typescript
import { Camera } from 'expo-camera';
const { status } = await Camera.requestCameraPermissionsAsync();
```
## Troubleshooting

If detection, liveness, or matching results are poor:

- Face should be clearly visible and not occluded
- Hold the camera steady
- Ensure adequate lighting
- Adjust `setSharpnessThreshold()` if needed

## License

MIT