A lightweight JavaScript neural network library for learning AI concepts and rapid front-end experimentation. PyTorch-inspired, zero dependencies, perfect for educational use.

Mini-JSTorch is a dependency-free JavaScript neural network library designed for education, experimentation, and small-scale models. It prioritizes clarity, numerical correctness, and accessibility over performance or large-scale production use.

In v1.8.0, we introduce SoftmaxCrossEntropyLoss and BCEWithLogitsLoss.
Mini-JSTorch is NOT a replacement for PyTorch, TensorFlow, or TensorFlow.js.
It is intentionally scoped to remain small, readable, and easy to debug.
Key conventions:
- ES modules only (package.json `type: module`)
- Data is plain nested arrays of shape `[batch][features]`
- In the browser, the library is exposed via a global JST object
```html
<!-- Minimal browser sketch; the import path is illustrative. -->
<p id="status">Status: Initializing...</p>
<script type="module">
  import { Sequential, Linear } from "./src/jstorch.js";
  new Sequential([new Linear(2, 2)]); // model built in the browser
  document.getElementById("status").textContent = "Status: Ready";
</script>
```
---
Core Features
Layers
- Linear
- Flatten
- Conv2D (experimental)
Activations
- ReLU
- Sigmoid
- Tanh
- LeakyReLU
- GELU
- Mish
- SiLU
- ELU
Loss Functions
- MSELoss
- CrossEntropyLoss (legacy)
- SoftmaxCrossEntropyLoss (recommended)
- BCEWithLogitsLoss (recommended)
Optimizers
- SGD
- Adam
- AdamW
- Lion
Learning Rate Schedulers
- StepLR
- LambdaLR
- ReduceLROnPlateau
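The schedulers adjust the optimizer's learning rate over the course of training. A minimal sketch, assuming StepLR takes PyTorch-style (optimizer, stepSize, gamma) arguments; see demo/scheduler.js for the canonical usage:

```javascript
import { Sequential, Linear, SGD, StepLR } from "./src/jstorch.js";

const model = new Sequential([new Linear(2, 1)]);
const optimizer = new SGD(model.parameters(), 0.1);

// Assumed PyTorch-style signature: multiply the LR by 0.5 every 100 epochs.
const scheduler = new StepLR(optimizer, 100, 0.5);

for (let epoch = 1; epoch <= 300; epoch++) {
  // ...forward/backward/optimizer.step() as in the Quick Start below...
  scheduler.step();
}
```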
Regularization
- Dropout (basic, educational)
- BatchNorm2D (experimental)
Utilities
- zeros
- randomMatrix
- dot
- addMatrices
- reshape
- stack
- flatten
- concat
- softmax
- crossEntropy
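A quick sketch of a few of these helpers; the argument orders shown are assumptions based on common conventions (demo/fu_fun.js has the canonical usage):

```javascript
import { zeros, randomMatrix, dot, softmax } from "./src/jstorch.js";

const a = randomMatrix(2, 3); // assumed (rows, cols): a 2x3 random matrix
const b = zeros(3, 2);        // assumed (rows, cols): a 3x2 zero matrix
const c = dot(a, b);          // matrix product: 2x2

// Assumed row-wise softmax over a [batch][classes] matrix.
console.log(softmax([[1, 2, 3]]));
```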
Model Container
- Sequential
---
Installation
```bash
npm install mini-jstorch
```
Node.js v18+ or any modern browser with ES module support is recommended.
---
Quick Start (Recommended Losses)
Multi-class Classification (SoftmaxCrossEntropy)
```javascript
import {
  Sequential,
  Linear,
  ReLU,
  SoftmaxCrossEntropyLoss,
  Adam
} from "./src/jstorch.js";

const model = new Sequential([
  new Linear(2, 4),
  new ReLU(),
  new Linear(4, 2) // logits output
]);

// XOR inputs and one-hot targets
const X = [
  [0, 0], [0, 1], [1, 0], [1, 1]
];
const Y = [
  [1, 0], [0, 1], [0, 1], [1, 0]
];

const lossFn = new SoftmaxCrossEntropyLoss();
const optimizer = new Adam(model.parameters(), 0.1);

for (let epoch = 1; epoch <= 300; epoch++) {
  const logits = model.forward(X);
  const loss = lossFn.forward(logits, Y);
  const grad = lossFn.backward();
  model.backward(grad);
  optimizer.step();
  if (epoch % 50 === 0) {
    console.log(`Epoch ${epoch}, Loss: ${loss.toFixed(4)}`);
  }
}
```
Do not combine SoftmaxCrossEntropyLoss with a Softmax layer: the loss already applies softmax internally.
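At inference time you can convert logits to probabilities yourself. A sketch continuing the example above, assuming the softmax utility works row-wise on a [batch][classes] matrix:

```javascript
import { softmax } from "./src/jstorch.js";

// Convert the trained model's logits into class probabilities.
const probs = softmax(model.forward(X)); // assumption: row-wise softmax
const predicted = probs.map(row => row.indexOf(Math.max(...row)));
console.log(predicted); // XOR targets above correspond to classes [0, 1, 1, 0]
```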
Binary Classification (BCEWithLogitsLoss)
```javascript
import {
  Sequential,
  Linear,
  ReLU,
  BCEWithLogitsLoss,
  Adam
} from "./src/jstorch.js";

const model = new Sequential([
  new Linear(2, 4),
  new ReLU(),
  new Linear(4, 1) // single logit output
]);

// XOR inputs and binary targets
const X = [
  [0, 0], [0, 1], [1, 0], [1, 1]
];
const Y = [
  [0], [1], [1], [0]
];

const lossFn = new BCEWithLogitsLoss();
const optimizer = new Adam(model.parameters(), 0.1);

for (let epoch = 1; epoch <= 300; epoch++) {
  const logits = model.forward(X);
  const loss = lossFn.forward(logits, Y); // can be logged as above
  const grad = lossFn.backward();
  model.backward(grad);
  optimizer.step();
}
```
Do not combine BCEWithLogitsLoss with a Sigmoid layer: the loss already applies the sigmoid internally.
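At inference time, squash each raw logit through a sigmoid yourself; plain arithmetic is enough, continuing the example above:

```javascript
// sigmoid(z) = 1 / (1 + e^(-z)) maps a logit to a probability in (0, 1).
const probs = model.forward(X).map(([z]) => 1 / (1 + Math.exp(-z)));
console.log(probs.map(p => (p > 0.5 ? 1 : 0))); // expected XOR labels: [0, 1, 1, 0]
```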
---
Save & Load Models
```javascript
import { saveModel, loadModel, Sequential } from "mini-jstorch";

const json = saveModel(model);
const model2 = new Sequential([...]); // same architecture
loadModel(model2, json);
```
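To keep a model across Node.js sessions, write the serialized form to disk. A sketch assuming saveModel returns a JSON string (wrap it in JSON.stringify if it returns a plain object):

```javascript
import { writeFileSync, readFileSync } from "node:fs";

// Assumption: saveModel returns a JSON string; adjust if it returns an object.
writeFileSync("model.json", saveModel(model));

// In a later session, restore into an identically structured model:
loadModel(model2, readFileSync("model.json", "utf8"));
```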
---
Demos
See the demo/ directory for runnable examples:
- demo/MakeModel.js – simple training loop
- demo/scheduler.js – learning rate schedulers
- demo/fu_fun.js – utility functions
```bash
node demo/MakeModel.js
node demo/scheduler.js
node demo/fu_fun.js
```
---
Design Notes & Limitations
- Training logic is 2D-first: `[batch][features]`
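For example, a batch of three samples with two features each is just a nested array:

```javascript
// [batch][features]: 3 samples, 2 features each.
const batch = [
  [0.5, 1.0],
  [0.1, 0.9],
  [0.7, 0.3]
];
```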