# OpenProctor

An open source proctoring SDK with face detection and media stream uploading.
```bash
npm install openproctor
```
---
## Overview

This repository contains a Proctoring SDK that allows:

- Capturing camera, microphone, and screen share streams.
- Real-time face detection using TensorFlow.js BlazeFace.
- Streaming and uploading media streams to a server for analysis and storage.

## Features

- **Camera, Microphone, and Screen Sharing**: Capture video, audio, and screen streams.
- **Real-time Face Detection**: Detect faces from the camera stream using BlazeFace.
- **Proctoring Setup**: Control when to start and stop media streams and face detection.
- **Stream Uploading**: Send media streams (video, audio, screen) to a server for storage and analysis.
## Table of Contents

1. [Get Started](#get-started)
2. [Using the SDK](#using-the-sdk)
3. [Face Detection](#face-detection)
4. [Uploading Streams to Server](#uploading-streams-to-server)
5. [Contribution Guidelines](#contribution-guidelines)
---
## Get Started

### Prerequisites

- Node.js (version 14 or later)
- npm or yarn

### Installation

Clone the repository:

```bash
git clone https://github.com/your-username/proctoring-sdk.git
cd proctoring-sdk
```

Install the dependencies using npm or yarn:

```bash
npm install
```

or

```bash
yarn install
```

### Run the Development Server

```bash
npm run dev
```

This will start the Next.js development server. Visit http://localhost:3000 in your browser to view the application.
---
## Using the SDK

The SDK provides functions to set up the camera, microphone, screen sharing, and face detection. Use the alias `@/lib` to import the SDK components in your project:

```typescript
import { Proctor } from '@/lib';
```
To start capturing the camera, microphone, and screen-share streams and run face detection:
```typescript
Proctor.setup({
  onVideoFrame: (cameraStream: MediaStream) => {
    // Handle the camera stream (e.g., attach to a video element)
  },
  videoElement: videoRef.current,   // The video element for face detection
  canvasElement: canvasRef.current, // The canvas element to draw face boxes
  onAudioStream: (audioStream: MediaStream) => {
    // Handle the audio stream
  },
  onScreenStream: (screenStream: MediaStream) => {
    // Handle the screen sharing stream
  },
  onTabSwitch: () => {
    // Handle tab switch detection
    alert('You switched tabs! Please return to the test.');
  }
});
```
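Since the SDK is consumed from a Next.js app, `videoRef` and `canvasRef` would typically be React refs. Below is a minimal sketch of a client component wiring this up, assuming only the `Proctor.setup` options shown above; the component name and layout are illustrative, not part of the SDK:

```tsx
'use client';

import { useEffect, useRef } from 'react';
import { Proctor } from '@/lib';

export default function ProctorView() {
  const videoRef = useRef<HTMLVideoElement>(null);
  const canvasRef = useRef<HTMLCanvasElement>(null);

  useEffect(() => {
    if (!videoRef.current || !canvasRef.current) return;
    Proctor.setup({
      onVideoFrame: (cameraStream: MediaStream) => {
        // Attach the camera stream so the candidate can see themselves
        if (videoRef.current) videoRef.current.srcObject = cameraStream;
      },
      videoElement: videoRef.current,
      canvasElement: canvasRef.current,
    });
  }, []);

  return (
    <div style={{ position: 'relative' }}>
      <video ref={videoRef} autoPlay muted playsInline />
      {/* The canvas overlays the video so face boxes line up with the feed */}
      <canvas ref={canvasRef} style={{ position: 'absolute', top: 0, left: 0 }} />
    </div>
  );
}
```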
To stop all active streams and face detection:

```typescript
Proctor.stopCamera();
Proctor.stopMicrophone();
Proctor.stopScreenShare();
```
---
## Face Detection

This SDK uses TensorFlow.js BlazeFace for real-time face detection. The face detection model is loaded automatically when the camera stream starts, and the results (face bounding boxes) are drawn on a canvas overlay.
1. The BlazeFace model is loaded and used to detect faces in real-time from the camera feed.
2. Face bounding boxes are drawn on a canvas overlaid on the video stream.
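The SDK encapsulates this loop, but for reference, here is a minimal sketch of what BlazeFace detection and box drawing look like with the standard `@tensorflow-models/blazeface` API; the function `runDetectionLoop` is illustrative, not part of the SDK:

```typescript
import * as blazeface from '@tensorflow-models/blazeface';
import '@tensorflow/tfjs-backend-webgl';

// Illustrative sketch; the SDK's FaceDetection module wraps this internally.
async function runDetectionLoop(video: HTMLVideoElement, canvas: HTMLCanvasElement) {
  const model = await blazeface.load(); // downloads the BlazeFace model weights
  const ctx = canvas.getContext('2d')!;

  const render = async () => {
    // returnTensors = false yields plain number arrays for the coordinates
    const predictions = await model.estimateFaces(video, false);
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.strokeStyle = 'red';
    for (const face of predictions) {
      const [x1, y1] = face.topLeft as [number, number];
      const [x2, y2] = face.bottomRight as [number, number];
      ctx.strokeRect(x1, y1, x2 - x1, y2 - y1); // draw one box per detected face
    }
    requestAnimationFrame(render);
  };
  render();
}
```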
Face detection is set up when the camera stream is initialized:
```typescript
import { FaceDetection } from '@/lib/faceDetection';

Proctor.setup({
  onVideoFrame: (cameraStream: MediaStream) => {
    // Attach the camera stream to the video element and start face detection
    FaceDetection.detectFaces(videoRef.current, canvasRef.current);
  },
  videoElement: videoRef.current,  // The video element for face detection
  canvasElement: canvasRef.current // The canvas element to draw face boxes
});
```
---
## Uploading Streams to Server

The SDK includes a `StreamUploader` class that captures and uploads the media streams (camera, microphone, and screen) to a server for analysis and storage.
```typescript
import { StreamUploader } from '@/lib/streamUploader';

// Initialize the uploader
const uploader = new StreamUploader({
  videoStream: cameraStream,
  audioStream: audioStream,
  screenStream: screenStream,
  serverUrl: 'https://your-server-url/upload' // Server endpoint to upload streams
});

// Start uploading the streams
uploader.start();
```
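Under the hood, uploading a live stream generally means recording it in chunks and POSTing each chunk to the server. Here is a minimal sketch of one way this can work with the browser's standard `MediaRecorder` API, not the SDK's actual implementation; the function name and chunk interval are illustrative, and the `file` field matches the Express example below:

```typescript
// Illustrative sketch: chunked recording + upload with standard browser APIs.
function recordAndUpload(stream: MediaStream, serverUrl: string): MediaRecorder {
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });

  recorder.ondataavailable = async (event: BlobEvent) => {
    if (event.data.size === 0) return;
    const form = new FormData();
    // 'file' is the multipart field name the Express/multer example below expects
    form.append('file', event.data, `chunk-${Date.now()}.webm`);
    await fetch(serverUrl, { method: 'POST', body: form });
  };

  recorder.start(5000); // emit a chunk roughly every 5 seconds
  return recorder;
}
```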
The following is an example of an Express.js server to handle media uploads:
```javascript
const express = require('express');
const multer = require('multer');
const path = require('path');
const fs = require('fs');

const app = express();
const port = 3000;

// Store uploaded files on disk under ./uploads, keeping original filenames
const storage = multer.diskStorage({
  destination: (req, file, cb) => {
    const uploadDir = path.join(__dirname, 'uploads');
    if (!fs.existsSync(uploadDir)) {
      fs.mkdirSync(uploadDir);
    }
    cb(null, uploadDir);
  },
  filename: (req, file, cb) => {
    cb(null, file.originalname);
  }
});

const upload = multer({ storage });

app.post('/upload', upload.single('file'), (req, res) => {
  console.log(`File uploaded: ${req.file.originalname}`);
  res.status(200).send('File uploaded successfully.');
});

app.listen(port, () => {
  console.log(`Server listening at http://localhost:${port}`);
});
```
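With the server running, you can sanity-check the endpoint with `curl`, assuming a local test file named `chunk.webm`:

```bash
curl -F "file=@chunk.webm" http://localhost:3000/upload
```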
---
## Contribution Guidelines

We welcome contributions to this project. Here’s how you can contribute:
1. Fork the project repository to your own GitHub account by clicking the **Fork** button at the top right of the repository page.
2. Clone your fork to your local machine:

   ```bash
   git clone https://github.com/your-username/proctoring-sdk.git
   ```

3. Create a new branch for your feature or bug fix:

   ```bash
   git checkout -b feature/new-feature
   ```

4. Make your changes in the codebase, and be sure to write unit tests for any new functionality.
5. Ensure that all tests pass before submitting your changes:

   ```bash
   npm run test
   ```

6. Commit your changes and push the branch to your fork:

   ```bash
   git add .
   git commit -m "Add new feature"
   git push origin feature/new-feature
   ```

7. Submit a pull request to the main repository, and provide a detailed description of your changes.
---
## License

This project is licensed under the GNU General Public License v3.0.
---
👋 hello@codseg.com