Showing 1-20 of 70 packages
Toxicity model in TensorFlow.js
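A minimal usage sketch for this entry, assuming it refers to the published `@tensorflow-models/toxicity` package; the threshold and label filter below are illustrative values, not defaults:

```ts
import '@tensorflow/tfjs'; // peer dependency providing the runtime backend
import * as toxicity from '@tensorflow-models/toxicity';

async function main() {
  // Predictions below this confidence are reported with match = null.
  const threshold = 0.9;
  // Restricting output to the overall "toxicity" label is an illustrative choice;
  // passing other labels (e.g. "insult", "threat") returns those scores too.
  const model = await toxicity.load(threshold, ['toxicity']);
  const predictions = await model.classify(['you are an idiot']);
  for (const { label, results } of predictions) {
    console.log(label, results[0].match, Array.from(results[0].probabilities));
  }
}

main();
```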
A comprehensive JavaScript library for content moderation, including profanity filtering, sentiment analysis, and toxicity detection. Leveraging advanced algorithms and external APIs, TextModerate provides developers with tools to create safer and more positive online environments.
A React toxicity-recognition wrapper capable of detecting toxic content in user input.
Get sentiment and toxicity of a text.
TypeScript/JS client for toxicity analysis
🛡️ Advanced content analysis and moderation system with multi-variant optimization. Features context-aware detection, harassment prevention, and ML-powered toxicity analysis. Pre-1.0 development version.
A package to analyze text for profanity and to detect the rating of an image using AI
A simple server for use with the Perspective API. Serves static content and provides an open endpoint for sending single-attribute requests, e.g. toxicity. This illustrates how to send requests to the API.
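The kind of single-attribute request such a server forwards to the Perspective API looks roughly like the sketch below, against the public `comments:analyze` endpoint; the API key, language, and attribute choice are placeholders:

```ts
const endpoint = 'https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze';

async function scoreToxicity(text: string, apiKey: string): Promise<number> {
  const res = await fetch(`${endpoint}?key=${apiKey}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      comment: { text },
      languages: ['en'],
      requestedAttributes: { TOXICITY: {} }, // one attribute per request
    }),
  });
  const data = await res.json();
  // summaryScore.value is a probability-like score in [0, 1].
  return data.attributeScores.TOXICITY.summaryScore.value;
}
```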
A rule-based, AI-less text toxicity and content checker for chat and comment moderation.
`@mastra/evals` ships a collection of scoring utilities you can run locally or inside your own evaluation pipelines. These scorers come in two flavors.
This package detects how much toxicity is present in your text and returns the toxicity percentage, the toxic words used, and a list of where each toxic word appears in the given text.
A TypeScript library for validating and securing LLM prompts
Spam Scanner - The Best Anti-Spam Scanning Service and Anti-Spam API
A utility to format prompts for a cleaner presentation and optimal token usage
A TypeScript implementation of decompression calculation algorithms for scuba diving, featuring Bühlmann ZH-L16C algorithm with gradient factors, gas management, and oxygen toxicity tracking.
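As a rough illustration of the math such a library implements (not this package's actual API), constant-depth inert-gas loading follows the Haldane equation, and the tolerated ambient pressure per compartment combines the Bühlmann ZH-L16C a/b coefficients with a gradient factor:

```ts
// Pressures in bar, times in minutes. Values for a, b, and half-times come
// from the per-compartment ZH-L16C tables (not reproduced here).

// Haldane loading at constant depth: P(t) = Palv + (P0 - Palv) * exp(-k * t)
function tissueLoading(p0: number, pAlv: number, halfTimeMin: number, tMin: number): number {
  const k = Math.LN2 / halfTimeMin; // rate constant from the compartment half-time
  return pAlv + (p0 - pAlv) * Math.exp(-k * tMin);
}

// Tolerated ambient pressure with a gradient factor gf in (0, 1]:
// pAmbTol = (pTissue - a * gf) / (gf / b + 1 - gf)
function toleratedAmbientPressure(pTissue: number, a: number, b: number, gf: number): number {
  return (pTissue - a * gf) / (gf / b + 1 - gf);
}
```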
Protect any MCP server from malicious entities and confidential PII.
A Socket.IO-driven command-line chat app. Our server uses TensorFlow to moderate messages and tries to stop toxicity and bad language.
The official TypeScript library for the Moderation API
TypeScript-native guardrails engine for AI applications. Content safety, prompt injection detection, output validation, and intelligent rate limiting.