A WebGPU-powered neural network library built on simple-compute-shaders.
A high-performance, memory-efficient neural network library powered by WebGPU.
scs-neural is built for speed and efficiency, leveraging the parallel processing power of WebGPU. It is designed with a "performance-first" philosophy, prioritizing low-level buffer management and optimized compute shaders over high-level developer experience (DX).
It is built on simple-compute-shaders for direct WGSL control.

This library does not use standard tensor libraries. Instead:
1. Buffer Swapping: We use a "ping-pong" technique where two buffers are swapped between layers to avoid unnecessary allocations during inference and training.
2. Direct Memory Access: Inputs and outputs are handled as flat Float32Array buffers, mapped directly to GPU memory.
3. Minimized Host-Device Sync: Training loops and genetic evaluations are designed to keep data on the GPU as much as possible, reducing the bottleneck of CPU-GPU communication.
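The ping-pong technique can be sketched as follows. This is an illustrative model of the idea, not the library's actual internals: two reusable buffers alternate between "read" and "write" roles as each layer executes, so no per-layer allocation is needed.

```typescript
// For each layer pass, return which of the two buffers is read from and
// which is written to. The roles simply swap on every layer.
function pingPongIndices(layerCount: number): Array<[number, number]> {
  const passes: Array<[number, number]> = [];
  for (let layer = 0; layer < layerCount; layer++) {
    const read = layer % 2;        // buffer holding the previous layer's output
    const write = (layer + 1) % 2; // buffer receiving this layer's output
    passes.push([read, write]);
  }
  return passes;
}
```

With three layers this yields `[[0,1],[1,0],[0,1]]`: the output of each pass becomes the input of the next without copying.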
```bash
npm install scs-neural
```
Define your network architecture using the `layers` configuration.
#### Simple FFNN (Feed-Forward Neural Network)
```typescript
import { NeuralNetwork, ActivationType, LayerType } from "scs-neural";

const nn = new NeuralNetwork({
  layers: [
    { type: LayerType.INPUT, shape: [3] },
    { type: LayerType.DENSE, size: 12 },
    { type: LayerType.DENSE, size: 3 },
  ],
  trainingBatchSize: 10,
  testingBatchSize: 1,
  hiddenActivationType: ActivationType.RELU,
  outputActivationType: ActivationType.LINEAR,
});

await nn.initialize("xavier");
```
#### CNN (Convolutional Neural Network)
```typescript
const nn = new NeuralNetwork({
  layers: [
    { type: LayerType.INPUT, shape: [28, 28, 1] }, // [height, width, channels]
    { type: LayerType.CONV2D, kernelSize: 3, filters: 16, padding: 1, activation: ActivationType.RELU },
    { type: LayerType.MAXPOOL2D, poolSize: 2 },
    { type: LayerType.FLATTEN },
    { type: LayerType.DENSE, size: 10 },
  ],
  trainingBatchSize: 32,
});

await nn.initialize("he");
```
#### Layer Types
| Layer Type | Properties | Description |
|------------|------------|-------------|
| INPUT | `shape: number[]` | Input dimensions, e.g., `[size]` or `[h, w, c]`. |
| DENSE | `size: number`, `activation?` | Fully connected layer. |
| CONV2D | `kernelSize`, `filters`, `stride?`, `padding?`, `activation?` | 2D convolutional layer. |
| MAXPOOL2D | `poolSize`, `stride?` | 2D max pooling layer. |
| FLATTEN | (none) | Flattens multi-dimensional input for DENSE layers. |
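When stacking CONV2D and MAXPOOL2D layers, the spatial dimensions follow the standard output-size formula. The sketch below assumes conventional zero-padding semantics; verify against the library's WGSL shaders if exactness matters.

```typescript
// out = floor((in + 2*padding - kernel) / stride) + 1
function convOutputSize(input: number, kernel: number, stride = 1, padding = 0): number {
  return Math.floor((input + 2 * padding - kernel) / stride) + 1;
}
```

For the CNN above: a 28x28 input through kernel 3 with padding 1 stays 28x28, then a 2x2 max pool (stride 2) reduces it to 14x14.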
#### Activation Types
- RELU, SIGMOID, TANH, SOFTMAX, LINEAR
#### Initialization Methods
- xavier: Good for Sigmoid/Tanh.
- he: Recommended for ReLU.
- uniform: Random values between -0.1 and 0.1.
- zero: Initializes all parameters to 0.
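For intuition, these initializers can be sketched as below. The scaling formulas (`sqrt(1/fanIn)` for one common Xavier variant, `sqrt(2/fanIn)` for He) are assumptions based on the standard definitions, not the library's exact code.

```typescript
// Illustrative sketch of the four initialization strategies.
function initWeights(
  fanIn: number,
  fanOut: number,
  method: "xavier" | "he" | "uniform" | "zero"
): Float32Array {
  const w = new Float32Array(fanIn * fanOut);
  const scale =
    method === "xavier" ? Math.sqrt(1 / fanIn) :
    method === "he"     ? Math.sqrt(2 / fanIn) :
    method === "uniform" ? 0.1 : 0;
  for (let i = 0; i < w.length; i++) {
    // Uniform in [-scale, scale]; "zero" leaves everything at 0.
    w[i] = (Math.random() * 2 - 1) * scale;
  }
  return w;
}
```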
Run a single inference pass. The library handles the internal ping-ponging of buffers.
```typescript
const input = new Float32Array([...]); // Flat array matching input shape
const output = await nn.forwardPass(input);
console.log("Network Output:", output);
```
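For image-shaped inputs (e.g., the `[28, 28, 1]` CNN above), the flat array must encode the multi-dimensional data in a fixed order. A hypothetical helper, assuming row-major `[h][w][c]` layout (check the library's expected ordering before relying on this):

```typescript
// Flatten a [h][w][c] nested array into the flat Float32Array layout
// assumed by the input layer.
function flattenImage(pixels: number[][][]): Float32Array {
  const h = pixels.length, w = pixels[0].length, c = pixels[0][0].length;
  const flat = new Float32Array(h * w * c);
  let i = 0;
  for (let y = 0; y < h; y++)
    for (let x = 0; x < w; x++)
      for (let ch = 0; ch < c; ch++)
        flat[i++] = pixels[y][x][ch];
  return flat;
}
```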
Train the network using supervised learning. The train method handles batching and shuffling.
```typescript
await nn.train({
  inputActivations: [...], // Array of Float32Arrays
  targetActivations: [...],
  epochs: 10,
  learningRate: 0.001,
  progressCallback: (epoch, loss) => {
    console.log(`Epoch ${epoch} - Loss: ${loss}`);
  }
});
```
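A dataset for `train()` is simply two parallel arrays of flat `Float32Array`s. As a sketch, here is how one might generate training pairs for the color-inversion example (the generation logic is illustrative, not taken from the repository):

```typescript
// Build n (rgb, inverted-rgb) training pairs with values in [0, 1].
function makeInversionDataset(n: number) {
  const inputActivations: Float32Array[] = [];
  const targetActivations: Float32Array[] = [];
  for (let i = 0; i < n; i++) {
    const rgb = Float32Array.from({ length: 3 }, () => Math.random());
    inputActivations.push(rgb);
    targetActivations.push(rgb.map(v => 1 - v)); // target is the inverted color
  }
  return { inputActivations, targetActivations };
}
```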
scs-neural excels at evaluating large populations in parallel, which is ideal for reinforcement learning or neuroevolution.
```typescript
// populationWeights[layerIndex][genomeIndex] = Float32Array
const { activations } = await nn.evaluatePopulation({
  populationSize: 100,
  batchSize: 1,
  weights: populationWeights,
  biases: populationBiases,
  inputs: populationInputs, // Array of Float32Arrays [populationSize]
  returnActivations: true
});
```
#### GA Implementation Notes
- Weight Structure: Weights and biases must be provided as a 2D array: `[layerIndex][genomeIndex]`. `layerIndex` corresponds to the layer position in your architecture (skipping the input layer).
- Supported Layers: Currently supports DENSE, CONV2D, and FLATTEN for genetic evaluation.
- Limitations: MAXPOOL2D is currently only supported for standard backpropagation training and is not yet available for genetic evaluation.
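As a sketch of the expected `[layerIndex][genomeIndex]` structure, here is how a random population could be assembled for the simple `[3] -> 12 -> 3` dense network from the FFNN example. The initialization range is an arbitrary choice for illustration:

```typescript
// [fanIn, fanOut] for each trainable layer (input layer is skipped).
const layerSizes: Array<[number, number]> = [[3, 12], [12, 3]];
const populationSize = 100;

// populationWeights[layerIndex][genomeIndex] is one genome's flat weight matrix.
const populationWeights: Float32Array[][] = layerSizes.map(([fanIn, fanOut]) =>
  Array.from({ length: populationSize }, () =>
    Float32Array.from({ length: fanIn * fanOut }, () => Math.random() * 0.2 - 0.1)
  )
);

// Biases follow the same [layerIndex][genomeIndex] layout, one value per output unit.
const populationBiases: Float32Array[][] = layerSizes.map(([, fanOut]) =>
  Array.from({ length: populationSize }, () => new Float32Array(fanOut))
);
```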
This repository contains several examples showcasing the library:
- Ant Warfare (New): A complex simulation of two competing ant colonies evolving strategies via CNN-based neuroevolution.
- Flappy Bird Genetic Algorithm: Evolving simple neural networks to play Flappy Bird.
- Color Inversion: Standard supervised training to invert RGB colors.
To run the examples locally:
```bash
npm install
npm run dev
```
Then open the URL provided by Vite.
The core library is located in src/.
- Shaders: Located in src/shaders/, these WGSL files handle the core mathematical operations including Conv2D and MaxPool.
- Buffer Management: The library maintains dedicated `StorageBuffer` objects for weights, biases, gradients, and activations for every layer to avoid runtime allocations and garbage collection overhead.
ISC