A JavaScript/TypeScript autograd engine with operator overloading, inspired by micrograd
- 🔥 Automatic Differentiation: Full backpropagation support for scalar values
- ⚡ Operator Overloading: Natural mathematical syntax using JavaScript operator overloading
- 🧠 Neural Networks: Built-in neuron, layer, and MLP implementations
- 📦 Lightweight: Zero dependencies for the core library
- 🎯 TypeScript: Fully typed with excellent IDE support
- 🌐 Universal: Works in browsers and Node.js
```bash
npm install tinygrad
# or
pnpm add tinygrad
# or
yarn add tinygrad
# or
bun add tinygrad
```
```typescript
"use operator overloading";
import { engine, nn } from "tinygrad";
const { Value } = engine;
const { MLP } = nn;
// Scalar operations with automatic differentiation
const a = new Value(2.0);
const b = new Value(-3.0);
const c = new Value(10.0);
const e = a * b;
const d = e + c;
const f = d.relu();
// Compute gradients
f.backward();
console.log(f.data); // 4.0
console.log(a.grad); // -3.0
console.log(b.grad); // 2.0
// Build a neural network
const model = new MLP(3, [4, 4, 1]); // 3 inputs, 2 hidden layers of 4 neurons, 1 output
const x = [
  new Value(2.0),
  new Value(3.0),
  new Value(-1.0)
];
const output = model.call(x);
console.log(output.data); // Forward pass result
```
#### Value

The `Value` class represents a scalar value with gradient tracking.
Constructor:
```typescript
new Value(data: number, children?: Value[], _op?: string)
```

Supported Operations:
- `add(other)` or `+` - Addition
- `sub(other)` or `-` - Subtraction
- `mul(other)` or `*` - Multiplication
- `div(other)` or `/` - Division
- `pow(n)` or `**` - Power
- `neg()` or unary `-` - Negation
- `relu()` - ReLU activation
Methods:
- `backward()` - Compute gradients via backpropagation
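
If the operator-overloading directive is not in use, the same graph can be built with the method API listed above. A minimal sketch mirroring the quick-start computation:

```typescript
import { engine } from "tinygrad";
const { Value } = engine;

// relu(a * b + c) built with method calls instead of operators
const a = new Value(2.0);
const b = new Value(-3.0);
const c = new Value(10.0);

const f = a.mul(b).add(c).relu(); // relu(2 * -3 + 10) = 4
f.backward();

console.log(f.data); // 4
console.log(a.grad); // -3 (df/da = b)
console.log(b.grad); // 2  (df/db = a)
```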
#### Neuron
```typescript
new Neuron(nin: number, nonlin: boolean = true)
```
#### Layer
```typescript
new Layer(nin: number, nout: number, nonlin: boolean = true)
```
#### MLP (Multi-Layer Perceptron)
```typescript
new MLP(nin: number, nouts: number[])
```
Methods:
- `call(x: Value[])` - Forward pass
- `parameters()` - Get all trainable parameters
- `zeroGrad()` - Reset gradients to zero
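
A small usage sketch of these methods; the parameter count shown assumes the standard fully-connected layout (one weight per input plus a bias per neuron):

```typescript
import { engine, nn } from "tinygrad";
const { Value } = engine;
const { MLP } = nn;

const model = new MLP(3, [4, 4, 1]);

// Forward pass on one sample
const out = model.call([new Value(2.0), new Value(3.0), new Value(-1.0)]);
console.log(out.data);

// Trainable parameters: (3*4 + 4) + (4*4 + 4) + (4*1 + 1) = 41
console.log(model.parameters().length); // 41

// Clear accumulated gradients before the next backward pass
model.zeroGrad();
```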
TinyGrad uses the `unplugin-op-overloading` plugin to enable natural mathematical syntax. Add the following to the top of your file:
```typescript
"use operator overloading";
```
This enables:
```typescript
const x = new Value(2);
const y = new Value(3);
const z = x * y + x ** 2; // Much cleaner than z = x.mul(y).add(x.pow(2))
```
```typescript
"use operator overloading";
import { engine, nn } from "tinygrad";
const { Value } = engine;
const { MLP } = nn;
// Dataset
const X = [[2, 3, -1], [3, -1, 0.5], [0.5, 1, 1], [1, 1, -1]];
const y = [1, -1, -1, 1]; // targets
const model = new MLP(3, [4, 4, 1]);
// Training loop
for (let i = 0; i < 100; i++) {
  // Forward pass
  const inputs = X.map(row => row.map(x => new Value(x)));
  const scores = inputs.map(x => model.call(x));

  // Loss (MSE)
  let loss = new Value(0);
  for (let j = 0; j < y.length; j++) {
    const diff = scores[j] - new Value(y[j]);
    loss = loss + diff * diff;
  }

  // Backward pass
  model.zeroGrad();
  loss.backward();

  // Update (SGD)
  const lr = 0.01;
  for (const p of model.parameters()) {
    p.data -= lr * p.grad;
  }

  if (i % 10 === 0) {
    console.log(`Step ${i}, Loss: ${loss.data}`);
  }
}
```
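
After training, the same `call` interface can be used for predictions. A minimal sketch, assuming the `model`, `X`, and `y` from the loop above are still in scope:

```typescript
// Evaluate the trained model on the training inputs
for (let j = 0; j < X.length; j++) {
  const pred = model.call(X[j].map(v => new Value(v)));
  console.log(`target: ${y[j]}, prediction: ${pred.data.toFixed(3)}`);
}
```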
Check out the interactive demo to see TinyGrad in action with:
- Real-time visualization of training progress
- Decision boundary visualization
- Interactive controls for learning rate and training steps
```bash
# Install dependencies
bun install
```
MIT
Inspired by micrograd by Andrej Karpathy.