Quickly integrate face, hand, and/or pose tracking to your frontend projects in a snap ✨👌
Powered by MediaPipe and TensorFlow.js.
This repository contains the source code for Handsfree.js in /src/, the documentation for it in /docs/, and the Handsfree Browser Extension in /extension/. You'll also find starter boilerplates in /boilerplate/.
Installing from a CDN
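If you'd rather not use a bundler, you can load the library from a CDN. The snippet below is a sketch only: the Unpkg paths and version range are assumptions, so double-check them against the published package (the same files you'd find in node_modules/handsfree/build/lib/):
```html
<head>
  <!-- Assumed CDN paths: verify against the published package and pin the version you want -->
  <link rel="stylesheet" href="https://unpkg.com/handsfree@8/build/lib/assets/handsfree.css" />
  <script src="https://unpkg.com/handsfree@8/build/lib/handsfree.js"></script>
</head>

<body>
  <script>
    // The script tag above exposes Handsfree as a global
    const handsfree = new Handsfree({showDebug: true, hands: true})
    handsfree.enablePlugins('browser')
    handsfree.start()
  </script>
</body>
```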
Installing from NPM
```bash
# From your project's root
npm i handsfree
```
```js
// Inside your app
import Handsfree from 'handsfree'

// Let's use hand tracking and enable the plugins tagged with "browser"
const handsfree = new Handsfree({showDebug: true, hands: true})
handsfree.enablePlugins('browser')
handsfree.start()
```
Hosting the models yourself
The above will load models, some of them over 10MB, from the Unpkg CDN. If you'd rather host these yourself (for example, to use the library offline), you can eject the models from the npm package into your project's public folder:
```bash
# Move the models into your project's public directory
# - change PUBLIC below to where you keep your project's assets

# ON WINDOWS
xcopy /e node_modules\handsfree\build\lib PUBLIC

# EVERYWHERE ELSE
cp -r node_modules/handsfree/build/lib/* PUBLIC
```
```js
import Handsfree from 'handsfree'

const handsfree = new Handsfree({
  hands: true,
  // Set this to the directory where you moved the models
  assetsPath: '/PUBLIC/assets',
})
handsfree.enablePlugins('browser')
handsfree.start()
```
Example Workflow
The following gives you a quick overview of how things work. The key takeaway is that everything is centered around hooks/plugins: named callbacks that run on every frame and can be toggled on and off.
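For example, a minimal plugin looks like this (the plugin name here is made up for illustration; the quickstart below shows the real options and models):
```js
import Handsfree from 'handsfree'

const handsfree = new Handsfree({hands: true})
handsfree.start()

// "frameLogger" is an illustrative name; any unique string works
handsfree.use('frameLogger', function (data) {
  // Runs once per webcam frame with the latest data from the active models
  console.log(data)
})

// Toggle it off whenever you like
handsfree.plugin.frameLogger.disable()
```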
Quickstart Workflow
The following workflow demonstrates how to use all features of Handsfree.js. Check out the Guides and References to dive deeper, and feel free to post on the Google Groups or Discord if you get stuck!
```js
// Let's enable face tracking with the default Face Pointer
const handsfree = new Handsfree({weboji: true})
handsfree.enablePlugins('browser')

// Now let's start things up
handsfree.start()

// Let's create a plugin called "logger"
// - Plugins run on every frame and are how you "plug in" to the main loop
// - "this" context is the plugin itself (use a regular function, not an arrow function).
//   In this case, "this" is handsfree.plugin.logger
handsfree.use('logger', function (data) {
  console.log(data.weboji.morphs, data.weboji.rotation, data.weboji.pointer, data, this)
})

// Let's switch to hand tracking now. To demonstrate that you can do this live,
// let's create a plugin that switches to hand tracking when both eyebrows go up
handsfree.use('handTrackingSwitcher', function ({weboji}) {
  if (weboji.state.browsUp) {
    // Disable this plugin
    // Same as handsfree.plugin.handTrackingSwitcher.disable()
    this.disable()

    // Turn off face tracking and enable hand tracking
    handsfree.update({
      weboji: false,
      hands: true
    })
  }
})

// You can enable and disable any combination of models and plugins
handsfree.update({
  // Disable weboji, which is currently running
  weboji: false,
  // Start the pose model
  pose: true,

  // This is also how you configure (or pre-configure) a bunch of plugins at once
  plugin: {
    fingerPointer: {enabled: false},
    faceScroll: {
      vertScroll: {
        scrollSpeed: 0.01
      }
    }
  }
})

// Disable all plugins
handsfree.disablePlugins()

// Enable only the plugins for making music (not actually implemented yet)
handsfree.enablePlugins('music')

// Overwrite our logger to display the original model APIs
handsfree.plugin.logger.onFrame = (data) => {
  console.log(handsfree.model.weboji?.api, handsfree.model.hands?.api, handsfree.model.pose?.api)
}
```
Examples
Face Tracking Examples
Face Pointers

Motion Parallax Display

Puppeteering Industrial Robots

Playing desktop games with face clicks

Hand Tracking Examples
Hand Pointers

Use with Three.js

Playing desktop games with pinch clicks

Laser pointers but with your finger

---
Pose Estimation Examples
Flappy Pose - Flappy Bird but where you have to flap your arms

Local Development
If you'd like to contribute to the library or documentation then the following will get you going:
- Install NodeJS and git
- Clone this repository: `git clone https://github.com/handsfreejs/handsfree`
- Install dependencies by running `npm i` in a terminal from the project's root
- Start development on `localhost:8080` by running `npm start`
- Hit `CTRL+C` from the terminal to close the server

Once you've run the above, you can just use `npm start`. If you pull the latest code, remember to run `npm i` to get any new dependencies (this shouldn't happen often).
Command line scripts
```bash
# Start local development on localhost:8080
npm start

# Build the library, documentation, and extension
npm run build

# Build only the library at /dist/lib/
npm run build:lib

# Build only the documentation at /dist/docs/
npm run build:docs

# Build only the extension at /dist/extension/
npm run build:extension

# Publish the library to NPM
npm login
npm publish

# Deploy the documentation to handsfree.js.org
deploy.sh
```
Dev Notes
- See vuepress-component-font-awesome for adding new icons to the documentation. Remember to run `npm run fa:build` when adding new font icons so that they are copied into the `docs/.vuepress/components/FA` folder
- You may occasionally need to restart the dev server after adding new files to `/docs/`; this also applies when changing `/docs/.vuepress/config.js`
The Handsfree Browser Extension
The Browser Extension is designed to help you browse the web handsfree through face and/or hand gestures. The goal is to develop a "Userscript Manager" like Tampermonkey, but for handsfree-ifying web pages, games, apps, WebXR, and really any other type of content found on the web.
How it works

- When you first install the extension, /src/background/handsfree.js checks if you've approved the webcam. If not, then it'll open the options page in src/options/stream-capture.html
- The popup panel has a "Start/Stop Webcam" button that communicates with the background script to start the webcam: /src/popup/index.html
- The background page is where the models are stored and run. This keeps everything isolated and only asks for webcam permission once (vs on every domain): /src/background/handsfree.js
- The background page also uses the "Picture in Picture" API to "pop the webcam" out of the browser. It renders the webcam feed and debug canvases into a single canvas and uses that as the srcObject of a separate video element, which is what gets Picture-in-Picture'd (see the sketch after this list)
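Here's a rough sketch of that pop-out flow; the function and variable names below are placeholders, not the extension's actual code:
```js
// Rough sketch only: names are placeholders, not the extension's actual code.
// "compositeCanvas" is the single canvas the webcam feed and debug overlays are drawn onto.
async function popOutWebcam (compositeCanvas) {
  const pipVideo = document.createElement('video')
  pipVideo.muted = true

  // Turn the canvas into a MediaStream and play it in a <video> element...
  pipVideo.srcObject = compositeCanvas.captureStream()
  await pipVideo.play()

  // ...then pop that video out of the browser. This must be triggered by a user
  // gesture, e.g. the popup's "Start/Stop Webcam" button
  await pipVideo.requestPictureInPicture()
}
```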
How to install
Install as an unpacked Chrome extension:
1. Visit chrome://extensions and enable "Developer mode"
2. Click "Load unpacked" and select the built extension folder (/dist/extension/, created with npm run build:extension)
Or try it right away with the serverless boilerplates in /boilerplate/!