# @ekipnico/image-mod

AI-powered NSFW and content moderation for images.

## Installation

```bash
npm install @ekipnico/image-mod
```

## Quick start

```typescript
import { createImageModMesh, ImageModerator } from '@ekipnico/image-mod';
const mesh = createImageModMesh();
const moderator = mesh.resolve(ImageModerator);
const result = await moderator.check(imageBuffer);
// { safe: true, flagged: [], scores: { adult: 0.1, violence: 0.0, ... } }
```
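For example, an upload handler can gate on the verdict. A minimal sketch, assuming the `moderator` resolved above; `handleUpload` is hypothetical application code, not part of the package:

```typescript
// Sketch: reject unsafe uploads. `moderator` is the ImageModerator from the quick start.
async function handleUpload(imageBuffer: Buffer): Promise<void> {
  const result = await moderator.check(imageBuffer);
  if (!result.safe) {
    // `flagged` lists every category whose score crossed its threshold.
    throw new Error(`Image rejected: ${result.flagged.join(', ')}`);
  }
  // Safe to store or publish the image from here.
}
```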
## Models

Models can be specified in two ways.

### String model IDs

Use string IDs for OpenAI, Anthropic, and Google models:

```typescript
// Via environment variable (recommended for defaults)
process.env.AI_MESH_DEFAULT_MODEL = 'gpt-4o';
// Or per-request
const result = await moderator.check(imageBuffer, { model: 'gemini-flash' });
```
Built-in models include `gpt-4o`, `gpt-4o-mini`, `gemini-flash`, `gemini-2.0-flash`, `claude-sonnet`, and more.
### AI SDK models

Pass any Vercel AI SDK model directly for providers such as Groq, Mistral, and DeepSeek:

```typescript
import { createGroq } from '@ai-sdk/groq';
const groq = createGroq({ apiKey: process.env.GROQ_API_KEY });
const result = await moderator.check(imageBuffer, {
model: groq('llama-3.3-70b-versatile'),
});
```
Or with Mistral:

```typescript
import { createMistral } from '@ai-sdk/mistral';
const mistral = createMistral({ apiKey: process.env.MISTRAL_API_KEY });
const result = await moderator.check(imageBuffer, {
model: mistral('pixtral-large-latest'),
});
```
## Environment variables

| Variable | Description |
|----------|-------------|
| `AI_MESH_DEFAULT_MODEL` | Default model when none is specified (default: `gpt-4o`) |
| `OPENAI_API_KEY` | Required for OpenAI models |
| `ANTHROPIC_API_KEY` | Required for Anthropic models |
| `GOOGLE_API_KEY` | Required for Google/Gemini models |
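For local development, one option is to load these variables from a `.env` file. A minimal sketch assuming the third-party `dotenv` package, which is not a dependency of this library:

```typescript
// Sketch: load API keys from a local .env file before building the mesh.
// dotenv is an assumption here, not something @ekipnico/image-mod requires.
import 'dotenv/config';
import { createImageModMesh, ImageModerator } from '@ekipnico/image-mod';

// Fall back to a cheaper default model if none is configured.
process.env.AI_MESH_DEFAULT_MODEL ??= 'gpt-4o-mini';

const moderator = createImageModMesh().resolve(ImageModerator);
```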
## API

### moderator.check(input, config?)

Checks an image for moderation issues.
Parameters:

- `input` - `ImageInput` (Buffer, URL, file path, or `ImageInput` object)
- `config.model` - `ModelSpec` (string ID or `LanguageModel`)
- `config.adult` - threshold for adult content (0-1, default: 1)
- `config.violence` - threshold for violence (0-1, default: 1)
- `config.racy` - threshold for racy content (0-1, default: 1)
- `config.medical` - threshold for medical content (0-1, default: 0.5)
- `config.spoof` - threshold for manipulated content (0-1, default: 0.5)
Threshold behavior:

- `0` = skip this category entirely
- `0.01`-`1` = flag if the score is >= the threshold (lower thresholds flag more content, i.e. stricter moderation)
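For example, to tighten the adult and racy checks while skipping the medical category entirely (a sketch using only the documented thresholds, with `imageBuffer` and `moderator` from the quick start):

```typescript
// Flag adult content at scores >= 0.4 and racy content at >= 0.6; skip medical.
const result = await moderator.check(imageBuffer, {
  adult: 0.4,
  racy: 0.6,
  medical: 0,
});

if (!result.safe) {
  console.warn('Flagged categories:', result.flagged);
}
```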
Returns:

```typescript
{
  safe: boolean;             // true if no categories were flagged
  flagged: string[];         // categories that exceeded their thresholds
  scores: ModerationScores;  // raw scores (0-1) for each category
}
```
## Using an existing mesh

To add the services to an existing mesh-ioc application instead of creating a standalone mesh, use `registerImageModServices`:

```typescript
import { Mesh } from 'mesh-ioc';
import { registerImageModServices, ImageModerator } from '@ekipnico/image-mod';
const mesh = new Mesh('MyApp');
// ... register your services ...
registerImageModServices(mesh);
const moderator = mesh.resolve(ImageModerator);
```