# @neural-trader/example-neuromorphic-computing

Neuromorphic computing with Spiking Neural Networks (SNNs), STDP learning, and reservoir computing for ultra-low-power machine learning.
This package demonstrates neuromorphic computing principles using event-driven computation, biological learning rules, and swarm-based topology optimization. It's designed for applications requiring temporal processing, pattern recognition, and energy-efficient machine learning.
## Installation

```bash
npm install @neural-trader/example-neuromorphic-computing
```
## Quick Start

```typescript
import {
  SpikingNeuralNetwork,
  createSTDPLearner,
} from '@neural-trader/example-neuromorphic-computing';

// Create a 20-neuron network with random all-to-all connectivity
const network = new SpikingNeuralNetwork(20);
network.connectFullyRandom([-0.5, 0.5]);

// Create an STDP learner with default parameters
const learner = createSTDPLearner('default');

// Training patterns: each entry lists the neuron IDs to stimulate
const patterns = [
  [0, 1, 2, 3, 4], // Pattern A
  [5, 6, 7, 8, 9], // Pattern B
  [10, 11, 12, 13, 14], // Pattern C
];

// Train with STDP for 100 ms per pattern
patterns.forEach((pattern) => {
  learner.train(network, pattern, 100);
});

// Test recognition
network.reset();
network.injectPattern(patterns[0]);
const spikes = network.simulate(100);
console.log(`Generated ${spikes.length} spikes`);
```
## Reservoir Computing with a Liquid State Machine

```typescript
import { createLSM } from '@neural-trader/example-neuromorphic-computing';

// Create a medium-sized Liquid State Machine with 10 inputs and 3 outputs
const lsm = createLSM('medium', 10, 3);

// Prepare training data
const inputs = [
  [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
  [0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
  // ... more patterns
];
const targets = [
  [1, 0, 0], // Class 0
  [0, 1, 0], // Class 1
  // ... more labels
];

// Train the readout layer (50 ms simulation per pattern)
const error = lsm.trainReadout(inputs, targets, 50);
console.log(`Training MSE: ${error.toFixed(4)}`);

// Make predictions
const prediction = lsm.predict(inputs[0], 50);
console.log('Prediction:', prediction);
```
## Swarm Topology Optimization

```typescript
import {
  SwarmTopologyOptimizer,
  patternRecognitionFitness,
  FitnessTask,
} from '@neural-trader/example-neuromorphic-computing';

// Define the optimization task
const task: FitnessTask = {
  inputs: [[0, 1, 2], [3, 4, 5], [6, 7, 8]],
  targets: [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
  evaluate: patternRecognitionFitness,
};

// Create an optimizer for a 10-neuron network
const optimizer = new SwarmTopologyOptimizer(10, {
  swarm_size: 20,
  max_iterations: 50,
});

// Optimize the topology
const history = optimizer.optimize(task);
console.log(`Best fitness: ${optimizer.getBestFitness()}`);
console.log(`Optimal connections: ${optimizer.getBestTopology().length}`);

// Build a network from the best topology found
const optimized_network = optimizer.createOptimizedNetwork();
```
## Persistence with AgentDB

```typescript
import { NeuromorphicAgent } from '@neural-trader/example-neuromorphic-computing';

// Create an agent backed by AgentDB
const agent = new NeuromorphicAgent('./neuromorphic.db');

// Store network state
await agent.storeNetwork('my_network', network);

// Store STDP learner
await agent.storeSTDP('my_learner', learner);

// Store LSM
await agent.storeLSM('my_lsm', lsm);

// Store optimized topology
await agent.storeTopology('my_topology', optimizer);

// Retrieve stored data
const retrieved = await agent.retrieve('network:my_network');

// Find the 5 most similar stored networks
const similar = await agent.findSimilarNetworks(network, 5);

await agent.close();
```
## API

### SpikingNeuralNetwork

```typescript
// Create a network
const network = new SpikingNeuralNetwork(num_neurons, neuron_params?);

// Add connections
network.addConnection(source, target, weight, delay);
network.connectFullyRandom(weight_range);

// Inject spikes
network.injectSpike(neuron_id, time?);
network.injectPattern(pattern);

// Simulate
const spikes = network.simulate(duration, dt);

// State management
network.reset();
const state = network.getState();
```
### LIFNeuron

```typescript
const neuron = new LIFNeuron({
  tau_m: 20.0, // Membrane time constant (ms)
  v_rest: -70.0, // Resting potential (mV)
  v_threshold: -55.0, // Firing threshold (mV)
  v_reset: -75.0, // Reset potential (mV)
  t_refrac: 2.0, // Refractory period (ms)
});

const fired = neuron.update(current_time, input_current, dt);
const potential = neuron.getMembranePotential();
```
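The dynamics behind `update()` follow the standard leaky integrate-and-fire model: the membrane potential decays toward rest with time constant `tau_m`, is driven by the input current, and is reset after crossing the threshold. A minimal, self-contained sketch of those dynamics (Euler integration; an illustration of the model, not the package's implementation):

```typescript
// Minimal LIF neuron sketch (Euler integration). Parameter names mirror the
// package's configuration object; `simulateLIF` is a hypothetical helper.
interface LIFParams {
  tau_m: number;       // membrane time constant (ms)
  v_rest: number;      // resting potential (mV)
  v_threshold: number; // firing threshold (mV)
  v_reset: number;     // reset potential (mV)
  t_refrac: number;    // refractory period (ms)
}

function simulateLIF(
  params: LIFParams,
  input_current: number, // constant drive, in mV-equivalent units
  duration: number,      // total simulation time (ms)
  dt = 0.1               // integration timestep (ms)
): number[] {
  const spike_times: number[] = [];
  let v = params.v_rest;
  let refrac_until = -Infinity;
  for (let t = 0; t < duration; t += dt) {
    if (t < refrac_until) continue; // hold the neuron during refractoriness
    // dv/dt = (v_rest - v + I) / tau_m
    v += (dt / params.tau_m) * (params.v_rest - v + input_current);
    if (v >= params.v_threshold) {
      spike_times.push(t);
      v = params.v_reset;
      refrac_until = t + params.t_refrac;
    }
  }
  return spike_times;
}

const lif_spikes = simulateLIF(
  { tau_m: 20, v_rest: -70, v_threshold: -55, v_reset: -75, t_refrac: 2 },
  20, // suprathreshold drive: steady state would sit above threshold
  100
);
console.log(`Neuron fired ${lif_spikes.length} times in 100 ms`);
```

With a drive of 20 mV the steady-state potential (-50 mV) exceeds the -55 mV threshold, so the neuron fires repeatedly; a drive of 5 mV would stay subthreshold and produce no spikes.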
### STDP Learner

```typescript
const learner = createSTDPLearner('default' | 'strong' | 'weak');

// Train on a single pattern
const result = learner.train(network, pattern, duration);

// Train on multiple patterns over several epochs
const history = learner.trainMultipleEpochs(
  network,
  patterns,
  epochs,
  duration
);
```
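STDP adjusts each synapse based on the relative timing of pre- and post-synaptic spikes: a pre-spike shortly before a post-spike strengthens the synapse, the reverse order weakens it. A sketch of the pair-based rule (the amplitudes and time constants here are illustrative, not the package's presets):

```typescript
// Pair-based STDP weight update (illustrative constants, not the package
// defaults). dt_spike = t_post - t_pre: positive means the pre-synaptic
// spike arrived first.
function stdpDeltaW(
  dt_spike: number,
  a_plus = 0.01,   // potentiation amplitude
  a_minus = 0.012, // depression amplitude
  tau_plus = 20,   // potentiation time constant (ms)
  tau_minus = 20   // depression time constant (ms)
): number {
  return dt_spike >= 0
    ? a_plus * Math.exp(-dt_spike / tau_plus)    // pre before post: potentiate
    : -a_minus * Math.exp(dt_spike / tau_minus); // post before pre: depress
}

console.log(stdpDeltaW(5));  // small positive weight change
console.log(stdpDeltaW(-5)); // small negative weight change
```

The exponential kernels mean that closely paired spikes cause large changes while widely separated spikes barely matter, which is what lets the network latch onto repeated temporal patterns.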
### Liquid State Machine (LSM)

```typescript
const lsm = createLSM('small' | 'medium' | 'large', input_size, output_size);

// Process input through the reservoir
const state = lsm.processInput(input, duration);

// Forward pass
const output = lsm.forward(input, duration);

// Train the readout layer
const error = lsm.trainReadout(train_inputs, train_targets, duration);

// Evaluate
const { mse, accuracy } = lsm.evaluate(test_inputs, test_targets, duration);
```
### SwarmTopologyOptimizer

```typescript
const optimizer = new SwarmTopologyOptimizer(network_size, {
  swarm_size: 20,
  max_connections: 100,
  max_iterations: 50,
});

const history = optimizer.optimize(task);
const topology = optimizer.getBestTopology();
const network = optimizer.createOptimizedNetwork();
const json = optimizer.exportTopology();
```
## Custom Fitness Functions

```typescript
function myCustomFitness(
  network: SpikingNeuralNetwork,
  inputs: number[][],
  targets: number[][]
): number {
  let total_score = 0;
  inputs.forEach((input, idx) => {
    network.reset();
    network.injectPattern(input);
    const spikes = network.simulate(100);
    // Your custom evaluation logic
    const score = evaluateSpikes(spikes, targets[idx]);
    total_score += score;
  });
  return total_score / inputs.length;
}
```
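The `evaluateSpikes` call above is a placeholder for your own scoring logic. One possible implementation, assuming spike events of the form `{ neuron, time }` (check the package's actual spike type) and one-hot target vectors indexed by output neuron:

```typescript
// Hypothetical spike evaluator: scores the fraction of spikes emitted by
// the neuron(s) marked in the one-hot target vector.
interface SpikeEvent {
  neuron: number; // assumed shape of the simulator's spike events
  time: number;
}

function evaluateSpikes(spikes: SpikeEvent[], target: number[]): number {
  if (spikes.length === 0) return 0; // silent network scores zero
  const hits = spikes.filter((s) => target[s.neuron] === 1).length;
  return hits / spikes.length;
}

const score = evaluateSpikes(
  [{ neuron: 0, time: 1 }, { neuron: 0, time: 2 }, { neuron: 1, time: 3 }],
  [1, 0, 0]
);
console.log(score); // ≈ 0.667: two of three spikes hit the target neuron
```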
## Neuron Configurations

```typescript
const fast_neuron = new LIFNeuron({
  tau_m: 5.0, // Fast dynamics
  v_threshold: -60.0, // Easy to fire
  t_refrac: 1.0, // Short refractory period
});

const slow_neuron = new LIFNeuron({
  tau_m: 40.0, // Slow dynamics
  v_threshold: -50.0, // Hard to fire
  t_refrac: 5.0, // Long refractory period
});
```
## Layered Networks

```typescript
// Create layers
const input_layer = new SpikingNeuralNetwork(10);
const hidden_layer = new SpikingNeuralNetwork(50);
const output_layer = new SpikingNeuralNetwork(5);

// Connect layers (manually coordinate simulation).
// In production, use a more sophisticated orchestration.
```
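One way to coordinate layers manually is to translate each layer's output spikes into delayed injections for the next layer. A hypothetical helper, assuming spike events of the form `{ neuron, time }` and a fan-out table mapping source neurons to target-layer neuron IDs:

```typescript
// Sketch of relaying spikes between layers. The { neuron, time } event
// shape and `relaySpikes` helper are assumptions, not part of the package.
interface SpikeEvent {
  neuron: number;
  time: number;
}

function relaySpikes(
  spikes: SpikeEvent[],
  projection: Map<number, number[]>, // source neuron -> target neuron IDs
  delay = 1.0                        // fixed axonal delay (ms)
): SpikeEvent[] {
  const out: SpikeEvent[] = [];
  for (const s of spikes) {
    for (const target of projection.get(s.neuron) ?? []) {
      out.push({ neuron: target, time: s.time + delay });
    }
  }
  return out;
}

// A spike from input neuron 0 fans out to hidden neurons 2 and 3, 1 ms later
const relayed = relaySpikes([{ neuron: 0, time: 5 }], new Map([[0, [2, 3]]]));
```

After relaying, the events can be injected into the next layer before simulating it, repeating the process layer by layer.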
## Build

```bash
npm run build
```
## Testing

The package includes comprehensive tests covering:

- ✅ LIF neuron dynamics
- ✅ Network connectivity
- ✅ Spike propagation
- ✅ STDP learning rules
- ✅ Reservoir computing
- ✅ Topology optimization
- ✅ Pattern recognition tasks
Run tests:

```bash
npm test
```

## Dependencies

- `@neural-trader/agentdb`: Vector database for network state persistence
- `@neural-trader/agentic-flow`: Multi-agent orchestration (planned)

## Performance Tips

1. Use sparse connectivity: dense networks are computationally expensive.
2. Tune the simulation timestep: a smaller `dt` is more accurate but slower.
3. Batch training: process multiple patterns before updating weights.
4. Prune weak connections: remove synapses whose weight magnitude falls below a threshold.
5. Use quantization: reduce weight precision for faster inference.
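As a sketch of tips 4 and 5, assuming synapses are stored as plain `{ source, target, weight }` records (the package's actual representation may differ):

```typescript
// Illustrative pruning and quantization helpers; `Connection` is a stand-in
// for however your network stores synapses.
interface Connection {
  source: number;
  target: number;
  weight: number;
}

function pruneWeak(connections: Connection[], threshold = 0.05): Connection[] {
  // Drop synapses whose weight magnitude falls below the threshold
  return connections.filter((c) => Math.abs(c.weight) >= threshold);
}

function quantizeWeights(connections: Connection[], levels = 16): Connection[] {
  // Snap each weight onto one of `levels` evenly spaced values in [-1, 1]
  const step = 2 / (levels - 1);
  return connections.map((c) => ({
    ...c,
    weight: Math.round(c.weight / step) * step,
  }));
}

const conns: Connection[] = [
  { source: 0, target: 1, weight: 0.42 },
  { source: 1, target: 2, weight: -0.01 }, // below threshold, pruned
  { source: 2, target: 0, weight: -0.3 },
];
console.log(quantizeWeights(pruneWeak(conns)).length); // 2
```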
## Hardware Targets

This implementation is designed to map to neuromorphic hardware:
- Intel Loihi: 130,000 neurons, 130M synapses
- IBM TrueNorth: 1M neurons, 256M synapses
- BrainScaleS: Mixed-signal analog/digital
- SpiNNaker: ARM-based digital spikes
## Roadmap

- [ ] Multi-compartment neuron models
- [ ] Homeostatic plasticity
- [ ] Short-term plasticity (STP)
- [ ] Reward-modulated STDP
- [ ] Convolutional spike layers
- [ ] GPU acceleration
- [ ] NAPI-RS native bindings
## References

1. Gerstner, W., & Kistler, W. M. (2002). Spiking Neuron Models
2. Maass, W., Natschläger, T., & Markram, H. (2002). Real-time computing without stable states: A new framework for neural computation
3. Bi, G. Q., & Poo, M. M. (1998). Synaptic modifications in cultured hippocampal neurons
4. Kennedy, J., & Eberhart, R. (1995). Particle swarm optimization
## License

MIT

## Contributing

Contributions welcome! Please open an issue or PR.

## Author

Neural Trader Team

Keywords: neuromorphic, spiking-neural-network, snn, stdp, reservoir-computing, liquid-state-machine, event-driven, low-power-ml, temporal-processing, swarm-optimization, agentdb