Various open source salient maps
```
npm install salient-maps
```

Example using the Deep Gaze model:

```
const models = require('salient-maps');
const cv = require('opencv4nodejs');

// Load the Deep Gaze model and create an instance sized to the desired map.
const Deep = models.deep.load();
const deep = new Deep({ width: 200, height: 200 });

// Compute the saliency map for an image loaded via opencv4nodejs.
const salientMap = deep.computeSaliency(cv.imread('myimage.jpg'));
```
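If you want to inspect the result on disk, here is a minimal sketch (assuming the returned map is a single-channel opencv4nodejs `Mat` with values normalized to [0, 1]; this may vary by model):

```
// Scale the (assumed) 0..1 float saliency values to 0..255 and write
// the map out as an 8-bit grayscale image for inspection.
const saliency8u = salientMap.convertTo(cv.CV_8U, 255);
cv.imwrite('saliency.png', saliency8u);
```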
| Option | Type | Default | Info |
| --- | --- | --- | --- |
| width | number | 200 | Width of the saliency map. It's not recommended to go above 300 or below 100. |
| height | number | 200 | Height of the saliency map. It's not recommended to go above 300 or below 100. |
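Map size is a speed/accuracy trade-off. A sketch of a smaller configuration within the recommended 100-300 range, using the same API as the example above:

```
// A smaller map computes faster at some cost to accuracy.
const smallDeep = new Deep({ width: 120, height: 120 });
const smallMap = smallDeep.computeSaliency(cv.imread('myimage.jpg'));
```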
While it's entirely up to you how you use these maps, the original intent of this project was to be paired with the salient-autofocus project to provide fast image auto-focus capabilities.
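For instance, a map produced here could be handed to salient-autofocus to derive a focus region. The sketch below is illustrative only: `getFocusRegion` is a hypothetical name, so check the salient-autofocus documentation for its actual API.

```
const autofocus = require('salient-autofocus'); // actual API may differ

// Hypothetical helper: derive an auto-focus region from the saliency map.
const region = autofocus.getFocusRegion(salientMap);
```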
Models
| ID | Description | License | Usage |
| --- | --- | --- | --- |
| deep | Deep Gaze port of the FASA (Fast, Accurate, and Size-Aware Salient Object Detection) algorithm. | MIT | Recommended for most static usage where high accuracy is important and near-realtime performance is sufficient (tunable by reducing map size). May not be ideal for video unless you drop the map size to 150×150 or lower. |
| deep-rgb | A variant of the Deep Gaze port that leverages the RGB colour space instead of LAB. | MIT | Not recommended; useful for comparison, and may perform better on some images. |
| spectral | A port of the Spectral Residual model from OpenCV Contributions. | BSD | Amazing performance, great for video, but at the cost of quality/accuracy. |
| fine | A port of the Fine Grained model from OpenCV Contributions. | BSD | Interesting for testing but useless for realtime applications. |
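All models appear to share the interface shown in the Deep Gaze example, so switching should only require changing the model key. A sketch, assuming every entry under `models` exposes `load()` the same way:

```
const models = require('salient-maps');
const cv = require('opencv4nodejs');

// Spectral Residual trades accuracy for speed, making it a fit for video.
const Spectral = models.spectral.load();
const spectral = new Spectral({ width: 150, height: 150 });
const frameMap = spectral.computeSaliency(cv.imread('frame.jpg'));
```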
Typical local setup:

```
git clone git@github.com:asilvas/salient-maps.git
cd salient-maps
npm i
```
By default, testing looks at `trainer/image-source`, so you can put any images you like there, or follow the instructions below to import a known dataset:
1. Download and extract CAT2000
2. Run `node trainer/scripts/import-CAT2000.js {path-to-CAT2000}`
The benefit of using the above script is that it will separate the (optional) truth maps into `trainer/image-truth`.
You can run visual previews of the available saliency maps against the dataset via:

```
npm run preview
```
Compare performance data between models:

```
npm run benchmark
```
Also available is the ability to export the salient map data to the `trainer/image-saliency` folder, broken down by saliency model. This permits review of maps from disk, in addition to being in a convenient format for submission to the MIT Saliency Benchmark for quality analysis against other models.

```
npm run export
```
While this project falls under an MIT license, each of the models is subject to its own license. See Models for details.