AsterMind Community
npm install @astermind/astermind-community


Complete ELM library with 21+ advanced variants, Pro features (RAG, reranking, summarization), and OmegaSynth synthetic data generation. Free and open-source.
AsterMind Community combines all features from AsterMind-ELM, AsterMind-Pro, AsterMind-Premium, and AsterMind-Synth into one unified, free, and open-source package under the MIT license.
---
AsterMind brings instant, tiny, on-device ML to the web. It lets you ship models that train in milliseconds, predict with microsecond latency, and run entirely in the browser: no GPU, no server, no tracking. With Kernel ELMs, Online ELM, DeepELM, and Web Worker offloading, you can create:
- Private, on-device classifiers (language, intent, toxicity, spam) that retrain on user feedback
- Real-time retrieval & reranking with compact embeddings (ELM, KernelELM, Nyström whitening) for search and RAG
- Interactive creative tools (music/drum generators, autocompletes) that respond instantly
- Edge analytics: regressors/classifiers from data that never leaves the page
- Deep ELM chains: stack encoders → embedders → classifiers for powerful pipelines, still tiny and transparent
Why it matters: ELMs give you closed-form training (no heavy SGD), interpretable structure, and tiny memory footprints.
AsterMind modernizes ELM with kernels, online learning, workerized training, robust preprocessing, and deep chaining, making seriously fast ML practical for every web app.
---
Major Release: All features from ELM, Pro, Premium, and Synth are now free and open-source!
- 21 Advanced ELM Variants (previously Premium-only, now free):
- Adaptive Online ELM, Forgetting Online ELM, Hierarchical ELM
- Attention-Enhanced ELM, Variational ELM, Time-Series ELM
- Transfer Learning ELM, Graph ELM, Graph Kernel ELM
- Adaptive Kernel ELM, Sparse Kernel ELM, Ensemble Kernel ELM
- Deep Kernel ELM, Robust Kernel ELM, ELM-KELM Cascade
- String Kernel ELM, Convolutional ELM, Recurrent ELM
- Fuzzy ELM, Quantum-Inspired ELM, Tensor Kernel ELM
- Pro Features: RAG, Reranking, Summarization, Information Flow Analysis (now free!)
- OmegaSynth: label-conditioned synthetic data generation (now free!)
- All Core Features: Kernel ELMs, Online ELM, DeepELM, Web Workers, and more
- MIT License: fully open-source, no license required!
See Releases for full changelog.
---
1. Introduction
2. Features
3. Kernel ELMs (KELM)
4. Online ELM (OS-ELM)
5. DeepELM
6. Web Worker Adapter
7. Installation
8. Usage Examples
9. Suggested Experiments
10. Why Use AsterMind
11. Core API Documentation
12. Method Options Reference
13. ELMConfig Options
14. Prebuilt Modules
15. Text Encoding Modules
16. UI Binding Utility
17. Data Augmentation Utilities
18. IO Utilities (Experimental)
19. Embedding Store
20. Utilities: Matrix & Activations
21. Adapters & Chains
22. Workers: ELMWorker & ELMWorkerClient
23. Example Demos and Scripts
24. Experiments and Results
25. Documentation
26. Releases
27. License
---
Welcome to AsterMind, a modular, decentralized ML framework built around cooperating Extreme Learning Machines (ELMs) that self-train, self-evaluate, and self-repair, like the nervous system of a starfish.
How This ELM Library Differs from a Traditional ELM
This library preserves the core Extreme Learning Machine idea (random hidden layer, nonlinear activation, closed-form output solve) but extends it with:
- Multiple activations (ReLU, LeakyReLU, Sigmoid, Linear, GELU)
- Xavier/Uniform/He initialization
- Dropout on hidden activations
- Sample weighting
- Metrics gate (RMSE, MAE, Accuracy, F1, Cross-Entropy, R²)
- JSON export/import
- Model lifecycle management
- UniversalEncoder for text (char/token)
- Data augmentation utilities
- Chaining (ELMChain) for stacked embeddings
- Weight reuse (simulated fine-tuning)
- Logging utilities
AsterMind is designed for:
* Lightweight, in-browser ML pipelines
* Transparent, interpretable predictions
* Continuous, incremental learning
* Resilient systems with no single point of failure
---
Kernel ELMs (KELM)
Supports Exact and Nyström modes with RBF/Linear/Poly/Laplacian/Custom kernels.
Includes whitened Nyström (persisted whitener for inference parity).
```ts
import { KernelELM, KernelRegistry } from '@astermind/astermind-community';

// X: number[][] feature matrix, Y: number[][] one-hot targets
const kelm = new KernelELM({
  outputDim: Y[0].length,
  kernel: { type: 'rbf', gamma: 1 / X[0].length },
  mode: 'nystrom',
  nystrom: { m: 256, strategy: 'kmeans++', whiten: true },
  ridgeLambda: 1e-2,
});
kelm.fit(X, Y);
```
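Once fitted, the same model can be queried; a minimal sketch, assuming `Xq` is a number[][] of query vectors with the same dimensionality as `X`, and using the `predictProbaFromVectors` / `getEmbedding` methods listed in the Core API section below:
```ts
// Assumed: Xq is a number[][] of query vectors shaped like rows of X
const probs = kelm.predictProbaFromVectors(Xq); // per-class probabilities for each query row
const emb = kelm.getEmbedding(Xq);              // kernel-space embeddings, e.g. for retrieval/RAG
```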
---
Online ELM (OS-ELM)
Stream updates via Recursive Least Squares (RLS) with an optional forgetting factor. Supports He/Xavier/Uniform initializers.
```ts
import { OnlineELM } from '@astermind/astermind-community';

const ol = new OnlineELM({ inputDim: D, outputDim: K, hiddenUnits: 256 });
ol.init(X0, Y0);   // initial batch
ol.update(Xt, Yt); // streaming updates
ol.predictProbaFromVectors(Xq);
```
Notes
- forgettingFactor controls how fast older observations decay (default 1.0).
- Two natural embedding modes: hidden (activations) or logits (pre-softmax). Use with ELMAdapter (see below).
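When the data distribution drifts over time, a forgetting factor below 1.0 down-weights older samples; a minimal sketch (0.99 is an illustrative value, not a recommended default):
```ts
// Drift-tolerant variant: forgettingFactor < 1.0 decays the influence of old observations
const drifting = new OnlineELM({
  inputDim: D,
  outputDim: K,
  hiddenUnits: 256,
  forgettingFactor: 0.99, // 1.0 (default) = never forget; smaller values adapt faster
});
drifting.init(X0, Y0);
drifting.update(Xt, Yt);
```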
---
DeepELM
Stack multiple ELM layers for deep nonlinear embeddings, with an optional top ELM classifier.
```ts
import { DeepELM } from '@astermind/astermind-community';

const deep = new DeepELM({
  inputDim: D,
  layers: [{ hiddenUnits: 128 }, { hiddenUnits: 64 }],
  numClasses: K
});

// 1) Unsupervised layer-wise training (autoencoders, Y = X)
const X_L = deep.fitAutoencoders(X);

// 2) Supervised head (ELM) on last-layer features
deep.fitClassifier(X_L, Y);

// 3) Predict
const probs = deep.predictProbaFromVectors(Xq);
```
JSON I/O
toJSON() and fromJSON() persist the full stack (AEs + classifier).
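A persistence sketch; toJSON()/fromJSON() are the documented methods, but treating fromJSON() as an instance method called on a freshly constructed model is an assumption here:
```ts
// Serialize the full stack (autoencoders + classifier)
const serialized = JSON.stringify(deep.toJSON());

// Restore later (assumption: fromJSON() is an instance method on a fresh DeepELM)
const restored = new DeepELM({
  inputDim: D,
  layers: [{ hiddenUnits: 128 }, { hiddenUnits: 64 }],
  numClasses: K
});
restored.fromJSON(JSON.parse(serialized));
```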
---
Web Worker Adapter
Move heavy ops off the main thread. Provides ELMWorker + ELMWorkerClient for RPC-style training/prediction with progress events.
- Initialize with initELM(config) or initOnlineELM(config)
- Train via train / trainFromData / fit / update
- Predict via predict, predictFromVector, or predictLogits
- Subscribe to progress callbacks per call
See Workers for the full API.
---
Installation
NPM (scoped package):
```bash
npm install @astermind/astermind-community
# or
pnpm add @astermind/astermind-community
# or
yarn add @astermind/astermind-community
```
Repository:
- GitHub: https://github.com/infiniteCrank/AsterMind-Community
- NPM: https://www.npmjs.com/package/@astermind/astermind-community
Migration from old packages:
- See Migration Guide for details
- All old packages (@astermind/astermind-elm, @astermind/astermind-pro, @astermind/astermind-premium, @astermind/astermind-synthetic-data) are deprecated
- Simply install @astermind/astermind-community and update your imports - no license required!
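A before/after import sketch for the migration; it assumes the class names exported by the old packages are unchanged in the community package, which the examples below suggest:
```ts
// Before (deprecated):
// import { ELM } from "@astermind/astermind-elm";

// After: the same export now lives in the unified community package
import { ELM } from "@astermind/astermind-community";
```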
---
Usage Examples
Basic ELM Classifier
```ts
import { ELM } from "@astermind/astermind-community";

const config = { categories: ['English', 'French'], hiddenUnits: 128 };
const elm = new ELM(config);
// Load or train logic here
const results = elm.predict("bonjour");
console.log(results);
```
Advanced ELM Variants (Now Free!):
```ts
import { AdaptiveOnlineELM, HierarchicalELM, TimeSeriesELM } from "@astermind/astermind-community";

// Adaptive Online ELM - dynamically adjusts hidden units
const adaptive = new AdaptiveOnlineELM({
  categories: ['class1', 'class2'],
  initialHiddenUnits: 128
});

// Hierarchical ELM - multi-level classification
const hierarchical = new HierarchicalELM({
  hierarchy: { 'root': ['animal', 'plant'], 'animal': ['mammal', 'bird'] },
  rootCategories: ['root']
});

// Time-Series ELM - specialized for sequential data
const timeSeries = new TimeSeriesELM({
  categories: ['trend_up', 'trend_down', 'stable'],
  sequenceLength: 10
});
```
Synthetic Data Generation (Now Free!):
```ts
import { OmegaSynth } from "@astermind/astermind-community";

const synth = new OmegaSynth({
  mode: 'hybrid', // or 'elm', 'exact', 'retrieval', 'perfect'
  maxLength: 32
});
await synth.train(dataset);
const generated = await synth.generate('label', 10);
```
CommonJS / Node:
```js
const { ELM, AdaptiveOnlineELM, OmegaSynth } = require("@astermind/astermind-community");
```
Kernel ELM / DeepELM: see the examples above.
---
Suggested Experiments
* Compare retrieval performance with Sentence-BERT and TF-IDF.
* Experiment with activations and token vs char encoding.
* Deploy in-browser retraining workflows.
---
Why Use AsterMind
Because you can build AI systems that:
* Are decentralized.
* Self-heal and retrain independently.
* Run in the browser.
* Are transparent and interpretable.
---
Core API Documentation

ELM
- train, trainFromData, predict, predictFromVector, getEmbedding, predictLogitsFromVectors, JSON I/O, metrics
- loadModelFromJSON, saveModelAsJSONFile
- Evaluation: RMSE, MAE, Accuracy, F1, Cross-Entropy, R²
- Config highlights: ridgeLambda, weightInit (uniform | xavier | he), seed

OnlineELM
- init, update, fit, predictLogitsFromVectors, predictProbaFromVectors, embeddings (hidden/logits), JSON I/O
- Config highlights: inputDim, outputDim, hiddenUnits, activation, ridgeLambda, forgettingFactor

KernelELM
- fit, predictProbaFromVectors, getEmbedding, JSON I/O
- mode: 'exact' | 'nystrom', kernels: rbf | linear | poly | laplacian | custom

DeepELM
- fitAutoencoders(X), transform(X), fitClassifier(X_L, Y), predictProbaFromVectors(X)
- toJSON(), fromJSON() for full-pipeline persistence

ELMChain
- sequential embeddings through multiple encoders

Encoders
- vectorize, vectorizeAll

Similarity search
- find(queryVec, dataset, k, topX, metric)
---
Method Options Reference

train(options)
- augmentationOptions: { suffixes, prefixes, includeNoise }
- weights: sample weights

trainFromData(X, Y, options)
- X: input matrix
- Y: label matrix or one-hot
- options: { reuseWeights, weights }

predict(text, topK)
- text: string
- topK: number of predictions

predictFromVector(vector, topK)
- vector: numeric vector
- topK: number of predictions

saveModelAsJSONFile(filename)
- filename: optional file name
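A sketch tying these options together on the elm instance from the basic usage example above; the augmentation values and sample weights are illustrative, and only the options listed above are passed:
```ts
// train(): documented options only; how the training text is supplied is not shown here
elm.train({
  augmentationOptions: { suffixes: ['!'], prefixes: [], includeNoise: true }, // illustrative values
  weights: [1, 1, 2, 1], // illustrative per-sample weights
});

// trainFromData(): numeric matrices plus options (reuseWeights simulates fine-tuning)
elm.trainFromData(X, Y, { reuseWeights: true });

// predict(): text plus the number of predictions to return
const top3 = elm.predict("bonjour", 3);
```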
---
ELMConfig Options Reference
| Option | Type | Description |
| --- | --- | --- |
| categories | string[] | List of labels the model should classify. (Required) |
| hiddenUnits | number | Number of hidden layer units (default: 50). |
| maxLen | number | Max length of input sequences (default: 30). |
| activation | string | Activation function (relu, tanh, etc.). |
| encoder | any | Custom UniversalEncoder instance (optional). |
| charSet | string | Character set used for encoding. |
| useTokenizer | boolean | Use token-level encoding. |
| tokenizerDelimiter | RegExp | Tokenizer regex. |
| exportFileName | string | Filename to export JSON. |
| metrics | object | Thresholds (rmse, mae, accuracy, etc.). |
| log | object | Logging config. |
| dropout | number | Dropout rate. |
| weightInit | string | Initializer (uniform, xavier, or he). |
| ridgeLambda | number | Ridge penalty for closed-form solve. |
| seed | number | PRNG seed for reproducibility. |
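For reference, a config sketch exercising several of these options (values are illustrative, not recommendations):
```ts
import { ELM } from "@astermind/astermind-community";

const elm = new ELM({
  categories: ['English', 'French'], // required
  hiddenUnits: 128,                  // default 50
  maxLen: 30,
  activation: 'relu',
  useTokenizer: false,               // char-level encoding
  dropout: 0.1,
  weightInit: 'xavier',
  ridgeLambda: 1e-2,
  seed: 42,
});
```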
---
Prebuilt Modules and Custom Modules
Includes: AutoComplete, EncoderELM, CharacterLangEncoderELM, FeatureCombinerELM, ConfidenceClassifierELM, IntentClassifier, LanguageClassifier, VotingClassifierELM, RefinerELM.
Each exposes .train(), .predict(), .loadModelFromJSON(), .saveModelAsJSONFile(), .encode(). Custom modules can be built on top.
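A usage sketch for one prebuilt module; only the method names listed above come from the library, while the constructor options and call arguments shown are assumptions:
```ts
import { LanguageClassifier } from "@astermind/astermind-community";

// Assumption: prebuilt modules accept ELMConfig-style options (categories, hiddenUnits, ...)
const lang = new LanguageClassifier({ categories: ['English', 'French'], hiddenUnits: 128 });

lang.train();                                    // documented method; training-data wiring not shown
const guess = lang.predict("bonjour");           // documented method
lang.saveModelAsJSONFile("language-model.json"); // documented method; filename is illustrative
```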
---
Text Encoding Modules
Includes TextEncoder, Tokenizer, UniversalEncoder.
Supports char-level & token-level encoding, normalization, and n-grams.
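A hypothetical sketch of the encoder path; the constructor options reuse the encoder-related ELMConfig fields above, but the encode() method name and number[] return shape are assumptions, not documented API:
```ts
import { UniversalEncoder } from "@astermind/astermind-community";

// Assumed options, mirroring the encoder-related ELMConfig fields (charSet, maxLen, useTokenizer)
const encoder = new UniversalEncoder({
  charSet: 'abcdefghijklmnopqrstuvwxyz',
  maxLen: 30,
  useTokenizer: false, // char-level
});

// Hypothetical call: turn text into a fixed-length numeric vector an ELM can consume
const vec: number[] = encoder.encode("hello");
```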
---
UI Binding Utility
bindAutocompleteUI(model, inputElement, outputElement, topK) helper.
Binds model predictions to a live HTML input.
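A wiring sketch based on the signature above; the DOM element IDs and the trained autocompleteModel are illustrative:
```ts
import { bindAutocompleteUI } from "@astermind/astermind-community";

// Assumes a trained model (e.g. the AutoComplete prebuilt module) and two existing DOM elements
const input = document.querySelector<HTMLInputElement>('#query')!;
const output = document.querySelector<HTMLElement>('#suggestions')!;

bindAutocompleteUI(autocompleteModel, input, output, 5); // show the top 5 predictions as the user types
```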
---
Data Augmentation Utilities
Augment with prefixes, suffixes, and noise.
Example:
Augment.generateVariants("hello", "abc", { suffixes: ["world"], includeNoise: true })
---
IO Utilities (Experimental)
JSON/CSV/TSV import/export, schema inference.
Experimental and may be unstable.
---
Embedding Store
Lightweight vector store with cosine/dot/euclidean KNN, unit-norm storage, ring buffer capacity.
Usage
```ts
import { EmbeddingStore } from '@astermind/astermind-community';

const store = new EmbeddingStore({ capacity: 5000, normalize: true });
store.add({ id: 'doc1', vector: [/* ... */], meta: { title: 'Hello' } });
const hits = store.query({ vector: q, k: 10, metric: 'cosine' });
```
---
Utilities: Matrix & Activations
Matrix: internal linear algebra utilities (multiply, transpose, addRegularization, solveCholesky, etc.).
Activations: relu, leakyrelu, sigmoid, tanh, linear, gelu, plus softmax, derivatives, and helpers (get, getDerivative, getPair).
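A sketch of using the activation helpers directly; that get('relu') returns a scalar callable is an assumption here, not documented behavior:
```ts
import { Activations } from "@astermind/astermind-community";

// Assumed signature: Activations.get(name) returns the activation as a scalar function
const relu = Activations.get('relu');
const hidden = [-1.5, 0, 2.3].map((v) => relu(v)); // elementwise application
```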
---
Adapters & Chains
ELMAdapter wraps an ELM or OnlineELM to behave like an encoder for ELMChain:
```ts
import { ELMAdapter, wrapELM, wrapOnlineELM } from '@astermind/astermind-community';

const enc1 = wrapELM(elm);                              // uses elm.getEmbedding(X)
const enc2 = wrapOnlineELM(online, { mode: 'logits' }); // 'hidden' or 'logits'
const chain = new ELMChain([enc1, enc2], { normalizeFinal: true });
const Z = chain.getEmbedding(X);                        // stacked embeddings
```
---
Workers: ELMWorker & ELMWorkerClient
ELMWorker (inside a Web Worker) exposes a tolerant RPC surface:
- lifecycle: initELM, initOnlineELM, dispose, getKind, setVerbose
- training: train, fit, update, trainFromData (all routed appropriately)
- prediction: predict, predictFromVector, predictLogits
- progress events: { type: 'progress', phase, pct } during training
ELMWorkerClient (on the main thread) is a thin promise-based RPC client:
```ts
import { ELMWorkerClient } from '@astermind/astermind-community/worker';

const client = new ELMWorkerClient(new Worker(new URL('./ELMWorker.js', import.meta.url)));
await client.initELM({ categories: ['A', 'B'], hiddenUnits: 128 });
await client.elmTrain({}, (p) => console.log(p.phase, p.pct));
const preds = await client.elmPredict('bonjour', 5);
```
---
Example Demos and Scripts
Run with npm run dev:* (autocomplete, lang, chain, news).
Fully in-browser.
---
Experiments and Results
Includes dropout tuning, hybrid retrieval, ensemble distillation, and multi-level pipelines.
Results are reported as Recall@1, Recall@5, and MRR.
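For readers new to these metrics, a generic sketch (not library code) of how Recall@k and MRR are computed from ranked retrieval results:
```ts
// rankedIds[i] = document IDs returned for query i, best first; relevantId[i] = the correct document
function recallAtK(rankedIds: string[][], relevantId: string[], k: number): number {
  const hits = rankedIds.filter((ids, i) => ids.slice(0, k).includes(relevantId[i])).length;
  return hits / rankedIds.length;
}

function meanReciprocalRank(rankedIds: string[][], relevantId: string[]): number {
  const rr = rankedIds.map((ids, i) => {
    const rank = ids.indexOf(relevantId[i]);
    return rank === -1 ? 0 : 1 / (rank + 1);
  });
  return rr.reduce((a, b) => a + b, 0) / rr.length;
}
```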
---
Documentation
AsterMind ELM includes comprehensive documentation to help you get started and master the library:
- Quick Start Tutorial: complete step-by-step guide covering all major features with practical examples
- Basic ELM, Kernel ELM, Online ELM, DeepELM
- Embeddings, ELM Chains, Web Workers
- Pre-built modules, model persistence
- Advanced features and troubleshooting
- AsterMind ELM Overview: high-level overview of what AsterMind ELM is and why tiny neural networks matter
- Core capabilities (classification, regression, embeddings, online learning)
- The AsterMind ecosystem
- Technical architecture overview
- Implementation Models: guide to different ways of implementing AsterMind
- SDK/Library Implementation: Integrating AsterMind into your applications
- Standalone Applications: Using pre-built example applications
- Service Engagement: Professional services for custom implementation
- How to choose the right approach for your needs
- Technical Requirements: system requirements for different platforms
- Windows, Linux, and macOS requirements
- Browser compatibility
- Development and runtime requirements
- Troubleshooting common issues
- Code Walkthrough: detailed code walkthrough for presentations and deep dives
- Entry points and exports
- Core architecture and configuration system
- Main ELM class implementation
- Training and prediction flows
- Key code snippets with line numbers
- Data Requirements: guide to data requirements for training models
- Minimum viable data sizes
- Recommendations for better generalization
- Data collection strategies
- ELM-specific considerations
- Examples Directory: working demo applications
- Language classification
- Autocomplete chains
- News classification
- Music genre detection
- And more...
- Node Examples: advanced Node.js examples
- Two-stage retrieval systems
- TF-IDF integration
- DeepELM + KernelELM retrieval
- Experimental architectures
- Legal Information: licensing, patents, and legal notices
| Document | Purpose | Audience |
|----------|---------|----------|
| Quick Start Tutorial | Learn how to use all features | Beginners |
| Overview | Understand what AsterMind is | Everyone |
| Implementation Models | Choose integration approach | Decision makers, developers |
| Technical Requirements | System setup and requirements | DevOps, developers |
| Code Walkthrough | Deep dive into code structure | Developers, presenters |
| Data Requirements | Training data guidelines | ML practitioners |
---
Releases
New features: Kernel ELM, Nyström whitening, OnlineELM, DeepELM, Worker adapter, EmbeddingStore 2.0, activations linear/gelu, config split.
Fixes: Xavier init, encoder guards, dropout scaling.
Breaking: Config is now NumericConfig | TextConfig.
---
MIT License
All features are now free and open-source! This package combines all features from the previous AsterMind packages (ELM, Pro, Premium, Synth) into one unified community edition under the MIT license. No license tokens or subscriptions required.
---
> "AsterMind doesn't just mimic a brain; it functions more like a starfish: fully decentralized, self-evaluating, and self-repairing."