A rule-based, AI-less text toxicity and content checker for chat and comment moderation.
text-guard is a powerful, dependency-free, and highly customizable text analysis and moderation utility written in TypeScript. It helps you quickly detect toxicity, spam, and banned content in user input (like chat messages or comments) before it reaches your server or database.
## Installation

Install text-guard using npm or yarn:

```bash
npm install text-guard
```

or

```bash
yarn add text-guard
```
## 🚀 Quick Start (Basic Usage)
The main function is `checkToxicity(text, config?)`. It returns an object detailing the score, the decision, and what rules were matched.

### Default Behavior

By default, the package uses a `TOXICITY_THRESHOLD` of 0.5 and includes the built-in bad/offensive word lists.
#### TypeScript
```ts
import { checkToxicity } from 'text-guard';
const text1 = "You are an absolute idiot and a loser.";
const text2 = "Hey, check out this great link: http://example.com/free";
const result1 = checkToxicity(text1);
const result2 = checkToxicity(text2);
console.log(result1);
/*
{
toxic: true,
score: 0.60, // Score is above the default 0.5 threshold
words: ['offensive:idiot', 'bad:loser']
}
*/
```
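In a real app you will usually branch on the returned `toxic` flag before a message is stored or broadcast. Here is a minimal sketch of that gate; `saveMessage` and `rejectMessage` are placeholder stubs, not part of the library:

```ts
import { checkToxicity } from 'text-guard';

// Placeholder stubs — swap these for your own persistence / response logic.
function saveMessage(text: string): void {
  console.log('saved:', text);
}

function rejectMessage(matchedRules: string[]): void {
  console.log('rejected, matched rules:', matchedRules);
}

function handleIncomingMessage(text: string): void {
  const result = checkToxicity(text);

  if (result.toxic) {
    // Block the message and surface which rules fired.
    rejectMessage(result.words);
  } else {
    saveMessage(text);
  }
}

handleIncomingMessage('You are an absolute idiot and a loser.'); // likely rejected
handleIncomingMessage('Good morning, everyone!');                // likely saved
```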
### Custom Configuration

You can override all default settings, including the required score (`TOXICITY_THRESHOLD`) and which built-in lists to use.
```ts
import { checkToxicity } from 'text-guard';
const myConfig = {
// Set a strict threshold, requiring a score of 0.8 or higher to be toxic
TOXICITY_THRESHOLD: 0.8,
// Custom weights (optional): Make spam patterns count a lot!
SPAM_PATTERN: 0.9,
// Custom Lists
customWords: ['president', 'companyname'],
bannedUrls: ['google.com', 'example.net'],
// Rule Toggles: Disable all built-in rules
useDefaultBadWords: false,
useDefaultOffensiveWords: false,
};
const inputText = "The companyname is running a big SCAM!!! check out google.com";
const result = checkToxicity(inputText, myConfig);
console.log(result);
/*
// Since the score will likely be high (custom word + spam) and is > 0.8
{
toxic: true,
score: 0.9 + [other custom weights],
words: ['Custom:companyname', 'BannedURL:google.com', 'Pattern:SPAM:caps']
}
*/
```
## ⚙️ Configuration Options (`ToxicityConfig`)
If you pass a config object to `checkToxicity`, any omitted weight or rule will fall back to its default value.
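For example, a config that only tightens the threshold keeps every other weight, list, and toggle at its default (a small sketch; the exact score depends on the library's internal rules):

```ts
import { checkToxicity } from 'text-guard';

// Only TOXICITY_THRESHOLD is overridden; BAD_WORD, OFFENSIVE_WORD,
// the built-in word lists, and all other options keep their defaults.
const strictConfig = { TOXICITY_THRESHOLD: 0.3 };

const result = checkToxicity('What a loser.', strictConfig);
console.log(result.toxic, result.score);
```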
### Weights & Threshold
| Property | Type | Default | Description |
| :--- | :--- | :--- | :--- |
| `TOXICITY_THRESHOLD` | number | 0.5 | The minimum score (0.0 to 1.0) required for text to be marked `toxic: true`. |
| `SPAM_PATTERN` | number | 0.6 | Weight assigned to general spam patterns (URLs, excessive caps, repetition). |
| `BAD_WORD` | number | 0.1 | Weight for words in the default Bad Words list. |
| `OFFENSIVE_WORD` | number | 0.3 | Weight for words in the default Offensive Words list. |
| `CUSTOM_WORD` | number | 0.3 | Weight for words defined in the `customWords` list. |
| `REPEATED_CHAR` | number | 0.2 | Weight for patterns like `Heeeeelllooo`. |
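The weights are what each matched rule contributes toward the score, so tuning a single weight can push borderline text past the threshold. A hedged sketch (the exact resulting scores depend on the library's internal scoring):

```ts
import { checkToxicity } from 'text-guard';

const text = 'What a loser.';

// With the default BAD_WORD weight (0.1), a single bad-word hit
// probably stays below the 0.5 threshold.
console.log(checkToxicity(text));

// Raising BAD_WORD makes the same hit count for much more,
// likely tipping the text over the default threshold.
console.log(checkToxicity(text, { BAD_WORD: 0.6 }));
```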
### Lists & Rule Toggles
| Property | Type | Default | Description |
| :--- | :--- | :--- | :--- |
| `customWords` | string[] | [] | Array of your own banned terms (case-insensitive). |
| `bannedUrls` | string[] | [] | Array of specific domain names/URLs to block (e.g., spam.net). |
| `useDefaultBadWords` | boolean | true | If false, the built-in list of general bad words is ignored. |
| `useDefaultOffensiveWords` | boolean | true | If false, the built-in list of highly offensive words is ignored. |
| `allowAllUrls` | boolean | false | If true, the default spam check for URLs is skipped (entries in `bannedUrls` are still checked). |
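For instance, a community that welcomes links but blocks a few known spam domains might combine `allowAllUrls` with `bannedUrls`. A sketch under those assumptions (the domains below are made up, and exact matching behavior is up to the library):

```ts
import { checkToxicity } from 'text-guard';

const linkFriendlyConfig = {
  // Skip the generic "message contains a URL" spam rule...
  allowAllUrls: true,
  // ...but still flag links to these specific domains (hypothetical examples).
  bannedUrls: ['spam.net', 'phishy-example.org'],
};

// Likely not toxic: the generic URL check is disabled.
console.log(checkToxicity('The docs live at http://example.com/guide', linkFriendlyConfig));

// Likely toxic: spam.net is on the banned list.
console.log(checkToxicity('Free money at http://spam.net/win', linkFriendlyConfig));
```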