# @neural-trader/example-dynamic-pricing

Self-learning dynamic pricing system with reinforcement learning optimization and swarm-based strategy exploration.
## Multiple Pricing Strategies
- Cost-plus pricing
- Value-based pricing
- Competition-based pricing
- Dynamic demand-based pricing
- Time-based (peak/off-peak) pricing
- Elasticity-optimized pricing (see the sketch after this list)
- RL-optimized pricing
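Elasticity-optimized pricing typically follows the standard markup rule: with constant price elasticity below -1, the optimal price over unit cost `c` is `p* = c * e / (1 + e)`. The sketch below illustrates that rule only; the function name and fallback markup are assumptions for illustration, not the package's internal strategy implementation.

```typescript
// Minimal sketch of elasticity-optimized pricing (illustrative only).
// With constant elasticity e < -1, the profit-maximizing price over unit
// cost c follows the markup rule: p* = c * e / (1 + e).
function elasticityOptimalPrice(unitCost: number, elasticity: number): number {
  if (elasticity >= -1) {
    // Inelastic demand: the markup rule has no interior optimum,
    // so fall back to a simple cost-plus price (assumed 20% markup).
    return unitCost * 1.2;
  }
  return unitCost * (elasticity / (1 + elasticity));
}

// Example: cost $60, elasticity -3  =>  p* = 60 * (-3 / -2) = $90
console.log(elasticityOptimalPrice(60, -3).toFixed(2));
```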
## Self-Learning Components
- Price elasticity estimation with AgentDB memory
- Reinforcement learning (Q-Learning, DQN, PPO, SARSA, Actor-Critic)
- Multi-armed bandit for price experimentation (see the bandit sketch after this list)
- Conformal prediction for uncertainty quantification
- Seasonality and promotion effect learning
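The bandit component can be pictured as an epsilon-greedy loop over candidate price points: explore a random price occasionally, otherwise exploit the price with the best observed reward. The class below is a minimal, self-contained sketch of that idea; the names are assumptions and it is not the package's bandit API.

```typescript
// Illustrative epsilon-greedy bandit for price experimentation
// (a sketch with assumed names, not the library's implementation).
class EpsilonGreedyPriceBandit {
  private counts: number[];
  private meanRewards: number[];

  constructor(private pricePoints: number[], private epsilon = 0.1) {
    this.counts = new Array(pricePoints.length).fill(0);
    this.meanRewards = new Array(pricePoints.length).fill(0);
  }

  // Explore with probability epsilon, otherwise exploit the best-performing price.
  selectPrice(): number {
    const i =
      Math.random() < this.epsilon
        ? Math.floor(Math.random() * this.pricePoints.length)
        : this.meanRewards.indexOf(Math.max(...this.meanRewards));
    return this.pricePoints[i];
  }

  // Update the running mean reward (e.g. observed revenue) for the chosen price.
  recordReward(price: number, reward: number): void {
    const i = this.pricePoints.indexOf(price);
    if (i < 0) return;
    this.counts[i] += 1;
    this.meanRewards[i] += (reward - this.meanRewards[i]) / this.counts[i];
  }
}

const bandit = new EpsilonGreedyPriceBandit([90, 95, 100, 105, 110], 0.15);
const trialPrice = bandit.selectPrice();
bandit.recordReward(trialPrice, trialPrice * 80 /* observed units sold */);
```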
## Swarm Intelligence
- Parallel strategy exploration
- Evolutionary algorithm for strategy optimization
- Consensus-based recommendations
- Tournament selection of best performers (see the sketch after this list)
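Tournament selection means sampling a small group of agents and keeping the one with the best realized performance. The snippet below is a generic sketch of that selection step with assumed data shapes; it is not the `PricingSwarm` internals.

```typescript
// Illustrative tournament selection over pricing agents (assumed shapes).
interface AgentScore {
  strategy: string;
  avgRevenue: number;
}

function tournamentSelect(agents: AgentScore[], tournamentSize = 3): AgentScore {
  const pick = () => agents[Math.floor(Math.random() * agents.length)];
  let best = pick();
  for (let i = 1; i < tournamentSize; i++) {
    const challenger = pick();
    if (challenger.avgRevenue > best.avgRevenue) best = challenger;
  }
  return best;
}

const winner = tournamentSelect([
  { strategy: 'cost-plus', avgRevenue: 7800 },
  { strategy: 'value-based', avgRevenue: 8150 },
  { strategy: 'dynamic-demand', avgRevenue: 8420 },
]);
console.log(`Selected strategy: ${winner.strategy}`);
```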
## Competitive Analysis
- OpenRouter-powered strategic advice
- Competitor response prediction
- Market structure identification
- Pricing gap detection
## Performance
- NAPI-RS bindings for critical paths
- Vectorized operations for batch processing
- AgentDB for fast pattern storage
- 150x faster than pure JavaScript
## Installation

```bash
npm install @neural-trader/example-dynamic-pricing
# or
yarn add @neural-trader/example-dynamic-pricing
```
## Quick Start

```typescript
import {
DynamicPricer,
ElasticityLearner,
RLOptimizer,
CompetitiveAnalyzer,
PricingSwarm,
MarketContext,
} from '@neural-trader/example-dynamic-pricing';
// Initialize components
const basePrice = 100;
const elasticityLearner = new ElasticityLearner('./data/elasticity.db');
const rlOptimizer = new RLOptimizer({
algorithm: 'q-learning',
learningRate: 0.1,
epsilon: 0.2,
});
const competitiveAnalyzer = new CompetitiveAnalyzer(process.env.OPENROUTER_API_KEY);
// Create pricer
const pricer = new DynamicPricer(
basePrice,
elasticityLearner,
rlOptimizer,
competitiveAnalyzer
);
// Get price recommendation
const context: MarketContext = {
timestamp: Date.now(),
dayOfWeek: 3,
hour: 14,
isHoliday: false,
isPromotion: false,
seasonality: 0.1,
competitorPrices: [95, 98, 102, 105],
inventory: 150,
demand: 80,
};
const recommendation = await pricer.recommendPrice(context);
console.log(`Recommended price: $${recommendation.price.toFixed(2)}`);
console.log(`Expected revenue: $${recommendation.expectedRevenue.toFixed(2)}`);
console.log(`Competitive position: ${recommendation.competitivePosition}`);
// Simulate market response and learn
const actualDemand = 75; // From your system
pricer.recordOutcome(recommendation.price, actualDemand, context);
```
## Swarm Strategy Exploration

```typescript
import { PricingSwarm } from '@neural-trader/example-dynamic-pricing';
const swarm = new PricingSwarm(
{
numAgents: 7,
strategies: ['cost-plus', 'value-based', 'competition-based', 'dynamic-demand'],
communicationTopology: 'mesh',
consensusMechanism: 'weighted',
explorationRate: 0.15,
},
basePrice,
elasticityLearner,
rlOptimizer,
competitiveAnalyzer
);
// Explore strategies in parallel
const result = await swarm.explore(context, 100);
console.log(`Best strategy: ${result.bestStrategy}`);
console.log(`Best price: $${result.bestPrice.toFixed(2)}`);
// Get consensus recommendation
const consensus = await swarm.getConsensusPrice(context);
```
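With `consensusMechanism: 'weighted'`, a natural interpretation is that each agent's recommended price is weighted by its recent performance before averaging. The function below sketches that idea under assumed data shapes; the actual `getConsensusPrice()` implementation may differ.

```typescript
// Illustrative performance-weighted consensus price (assumed shapes).
interface AgentRecommendation {
  price: number;
  recentReward: number; // e.g. realized revenue over the last exploration window
}

function weightedConsensusPrice(recs: AgentRecommendation[]): number {
  const totalReward = recs.reduce((sum, r) => sum + r.recentReward, 0);
  if (totalReward === 0) {
    // No signal yet: fall back to a simple average.
    return recs.reduce((sum, r) => sum + r.price, 0) / recs.length;
  }
  return recs.reduce((sum, r) => sum + r.price * (r.recentReward / totalReward), 0);
}

console.log(
  weightedConsensusPrice([
    { price: 98, recentReward: 8200 },
    { price: 102, recentReward: 7900 },
    { price: 105, recentReward: 6400 },
  ]).toFixed(2)
);
```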
## RL Optimization

```typescript
import { RLOptimizer } from '@neural-trader/example-dynamic-pricing';
// Configure RL algorithm
const rlOptimizer = new RLOptimizer({
algorithm: 'dqn', // or 'q-learning', 'ppo', 'sarsa', 'actor-critic'
learningRate: 0.1,
discountFactor: 0.95,
epsilon: 0.3,
epsilonDecay: 0.995,
minEpsilon: 0.05,
batchSize: 32,
memorySize: 10000,
});
// Training loop
for (let episode = 0; episode < 1000; episode++) {
const context = getMarketContext();
const action = rlOptimizer.selectAction(context, true);
const price = basePrice * action.priceMultiplier;
const demand = simulateDemand(price, context);
const reward = calculateReward(price, demand);
const nextContext = getNextMarketContext();
rlOptimizer.learn(context, action, reward, nextContext);
}
// Export learned policy
const policy = rlOptimizer.exportPolicy();
```
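The training loop above calls `getMarketContext`, `simulateDemand`, `calculateReward`, and `getNextMarketContext`, which are application code rather than package exports. One minimal way to stub them out for experimentation is sketched below; the demand curve, unit cost, and reward definition are assumptions for illustration.

```typescript
// Illustrative stand-ins for the helpers used in the training loop above.
import { MarketContext } from '@neural-trader/example-dynamic-pricing';

function getMarketContext(): MarketContext {
  const now = new Date();
  return {
    timestamp: now.getTime(),
    dayOfWeek: now.getDay(),
    hour: now.getHours(),
    isHoliday: false,
    isPromotion: false,
    seasonality: 0,
    competitorPrices: [95, 98, 102, 105],
    inventory: 150,
    demand: 100,
  };
}

// Constant-elasticity demand curve around a $100 reference price, with noise.
function simulateDemand(price: number, context: MarketContext, elasticity = -1.8): number {
  const noise = 1 + (Math.random() - 0.5) * 0.1;
  return context.demand * Math.pow(price / 100, elasticity) * noise;
}

// Reward as profit: (price - unit cost) * units sold (unit cost assumed $60).
function calculateReward(price: number, demand: number, unitCost = 60): number {
  return (price - unitCost) * demand;
}

const getNextMarketContext = getMarketContext;
```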
## Elasticity Learning

```typescript
import { ElasticityLearner } from '@neural-trader/example-dynamic-pricing';
const learner = new ElasticityLearner('./data/elasticity.db');
// Observe price-demand pairs
await learner.observe(95, 120, context);
await learner.observe(100, 100, context);
await learner.observe(105, 85, context);
// Get elasticity estimate
const elasticity = learner.getElasticity(context);
console.log(`Mean elasticity: ${elasticity.mean.toFixed(2)}`);
console.log(`Confidence: ${(elasticity.confidence * 100).toFixed(0)}%`);
// Predict demand at different prices
const prediction = learner.predictDemand(110, 100, 100, context);
console.log(`Predicted demand at $110: ${prediction.demand.toFixed(1)}`);
console.log(`95% CI: [${prediction.lower.toFixed(1)}, ${prediction.upper.toFixed(1)}]`);
// Learn patterns
const seasonality = await learner.learnSeasonality();
const promotionEffect = await learner.learnPromotionEffect();
```
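A common way to turn an elasticity estimate into a demand prediction is the constant-elasticity curve `Q(p) = Q_ref * (p / p_ref) ^ e`. The sketch below shows that formula by hand; it is an assumption for illustration and not necessarily what `predictDemand()` uses internally.

```typescript
// Constant-elasticity demand model (illustrative assumption).
function constantElasticityDemand(
  price: number,
  refPrice: number,
  refDemand: number,
  elasticity: number
): number {
  return refDemand * Math.pow(price / refPrice, elasticity);
}

// With elasticity -1.6, 100 units at $100 becomes roughly 85.9 units at $110.
console.log(constantElasticityDemand(110, 100, 100, -1.6).toFixed(1));
```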
## Competitive Analyzer

```typescript
import { CompetitiveAnalyzer } from '@neural-trader/example-dynamic-pricing';
const analyzer = new CompetitiveAnalyzer(process.env.OPENROUTER_API_KEY);
// Analyze competitor prices
const analysis = analyzer.analyze([95, 98, 102, 105]);
console.log(`Market average: $${analysis.avgPrice.toFixed(2)}`);
console.log(`Price dispersion: ${(analysis.priceDispersion * 100).toFixed(1)}%`);
console.log(`Market position: ${analysis.marketPosition}`);
// Get AI-powered strategic advice
const advice = await analyzer.getStrategicAdvice(
100,
[95, 98, 102, 105],
'E-commerce, high competition, peak season'
);
console.log(`Strategic advice: ${advice}`);
// Predict competitor response
const response = analyzer.predictCompetitorResponse(85, [95, 98, 102, 105]);
if (response.willMatch) {
console.log('Competitors likely to match price cut');
}
// Find pricing gaps
const gaps = analyzer.findPricingGaps([80, 95, 120, 150]);
console.log(`Found ${gaps.length} pricing opportunities`);
```
## Conformal Prediction

```typescript
import { ConformalPredictor } from '@neural-trader/example-dynamic-pricing';
const predictor = new ConformalPredictor(0.1); // 90% coverage
// Calibrate with historical data
const predictions = [100, 105, 95, 110, 90];
const actuals = [102, 103, 97, 108, 92];
predictor.calibrate(predictions, actuals);
// Make conformal prediction
const conformalPred = predictor.predict(105);
console.log(`Point prediction: ${conformalPred.point}`);
console.log(`90% interval: [${conformalPred.lower}, ${conformalPred.upper}]`);
// Adaptive prediction
const recentPreds = getRecentPredictions(); // your rolling window of recent predictions
const recentActuals = getRecentActuals();   // and the matching observed outcomes
const adaptivePred = predictor.adaptivePredict(105, recentPreds, recentActuals);
```
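The core of split conformal prediction is simple: take the (1 - alpha) quantile of absolute residuals on the calibration set and widen the point prediction by that amount. The sketch below shows that idea by hand; the `ConformalPredictor` class presumably does something along these lines, but this is an illustration of the technique, not its internals.

```typescript
// Split-conformal interval, sketched by hand (illustrative).
function conformalInterval(
  calibPredictions: number[],
  calibActuals: number[],
  newPrediction: number,
  alpha = 0.1 // 1 - alpha = target coverage
): { lower: number; upper: number } {
  // Absolute residuals on the calibration set, sorted ascending.
  const residuals = calibPredictions
    .map((p, i) => Math.abs(calibActuals[i] - p))
    .sort((a, b) => a - b);
  // Conservative (1 - alpha) empirical quantile.
  const n = residuals.length;
  const k = Math.min(n - 1, Math.ceil((n + 1) * (1 - alpha)) - 1);
  const q = residuals[k];
  return { lower: newPrediction - q, upper: newPrediction + q };
}

// Same calibration data as above: residuals are all 2, so the interval is [103, 107].
console.log(conformalInterval([100, 105, 95, 110, 90], [102, 103, 97, 108, 92], 105));
```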
## Native Bindings

For performance-critical operations, use the native bindings:
```typescript
import {
calculate_elasticity_fast,
predict_demand_batch,
q_learning_update_batch,
analyze_competition_fast,
} from '@neural-trader/example-dynamic-pricing/native';
// Fast elasticity calculation
const elasticity = calculate_elasticity_fast(prices, demands);
// Batch demand prediction
const demands = predict_demand_batch(prices, basePrice, baseDemand, elasticity);
// Batch Q-learning update
const newQValues = q_learning_update_batch(
qValues,
rewards,
nextQValues,
learningRate,
discountFactor
);
// Fast competitive analysis
const metrics = analyze_competition_fast(competitorPrices);
```
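A rough timing harness like the one below can sanity-check throughput on your own data. It assumes `predict_demand_batch` accepts plain number arrays with the argument order shown in the snippet above; actual speedups depend on data size and hardware.

```typescript
// Illustrative timing harness for the native batch prediction path.
import { performance } from 'node:perf_hooks';
import { predict_demand_batch } from '@neural-trader/example-dynamic-pricing/native';

const prices = Array.from({ length: 100_000 }, (_, i) => 50 + (i % 100));

const t0 = performance.now();
const demands = predict_demand_batch(prices, 100 /* basePrice */, 100 /* baseDemand */, -1.5);
const t1 = performance.now();

console.log(`Predicted ${demands.length} demand points in ${(t1 - t0).toFixed(1)} ms`);
```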
## Testing

Run comprehensive tests with simulated markets:

```bash
npm test
```
Test coverage includes:
- Individual pricing strategies
- Elasticity learning
- RL optimization
- Competitive analysis
- Swarm exploration
- Conformal prediction
- Integration scenarios
See API Documentation for complete reference.
## Architecture

```
+--------------------------------------------------------------+
|                        DynamicPricer                          |
|  +--------------+   +--------------+   +------------------+  |
|  |    7 Base    |   |   Ensemble   |   |    Conformal     |  |
|  |  Strategies  |---|  Recommender |---|    Prediction    |  |
|  +--------------+   +--------------+   +------------------+  |
+---------+--------------------+-------------------+-----------+
          |                    |                   |
  +-------v------+     +-------v-------+    +------v--------+
  |  Elasticity  |     |      RL       |    |  Competitive  |
  |   Learner    |     |   Optimizer   |    |   Analyzer    |
  |  (AgentDB)   |     |   (5 algos)   |    |  (OpenRouter) |
  +--------------+     +---------------+    +---------------+
          |                    |                   |
          +--------------------+-------------------+
                               |
                       +-------v--------+
                       |     Swarm      |
                       |  Exploration   |
                       |   (7 agents)   |
                       +----------------+
```
## Contributing

Contributions welcome! See CONTRIBUTING.md for guidelines.
## License

MIT License - see LICENSE for details.
## Related Packages

- @neural-trader/predictor - Neural network prediction
- agentdb - Vector database for agent memory
- agentic-flow - Multi-agent orchestration
## Support

- GitHub Issues: https://github.com/neural-trader/neural-trader/issues
- Discord: https://discord.gg/neural-trader
- Documentation: https://docs.neural-trader.ai
---
Built with ❤️ by the Neural Trader team