# explainai-core

Core explainability algorithms and model interfaces for ExplainAI.

## Installation

```bash
npm install explainai-core
```

## Features
- 🔍 SHAP (SHapley Additive exPlanations) - Model-agnostic feature importance
- 🎯 LIME (Local Interpretable Model-agnostic Explanations) - Local explanations
- 🌐 Universal Model Support - Works with any prediction function
- ⚡ High Performance - Optimized sampling and computation
- 📦 Zero Dependencies - Lightweight and standalone
- 🔒 Privacy-First - All computation runs locally
## Quick Start

```typescript
import { explain, createApiModel } from 'explainai-core';

// Create a model that calls your API
const model = createApiModel(
  {
    endpoint: 'http://localhost:3000/predict',
    method: 'POST',
    headers: { 'Content-Type': 'application/json' }
  },
  {
    inputShape: [10],
    outputShape: [1],
    modelType: 'regression',
    provider: 'api'
  }
);

// Generate SHAP explanation
const explanation = await explain(model, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], {
  method: 'shap',
  config: {
    samples: 100
  }
});

console.log(explanation);
// {
//   method: 'shap',
//   featureImportance: [
//     { feature: 0, importance: 0.45, ... },
//     { feature: 1, importance: -0.23, ... },
//     ...
//   ],
//   prediction: { value: 42.5 },
//   baseValue: 38.2
// }
```

## API
#### explain(model, input, options)
Generate explanations for model predictions.
```typescript
const explanation = await explain(model, input, {
  method: 'shap' | 'lime',
  config: {
    samples: 100,
    featureNames?: string[]
  }
});
```
#### createApiModel(apiConfig, metadata)
Create a model wrapper for REST API endpoints.
```typescript
const model = createApiModel(
  {
    endpoint: 'https://api.example.com/predict',
    method: 'POST',
    headers: { 'Authorization': 'Bearer token' }
  },
  {
    inputShape: [10],
    outputShape: [1],
    modelType: 'classification',
    provider: 'api'
  }
);
```
#### createCustomModel(predictFn, metadata)
Wrap any prediction function.
```typescript
const model = createCustomModel(
  async (input: number[]) => {
    // Your custom prediction logic
    return input.reduce((a, b) => a + b, 0);
  },
  {
    inputShape: [10],
    outputShape: [1],
    modelType: 'regression',
    provider: 'custom'
  }
);
```

## Explainability Methods
#### SHAP (Shapley Values)
```typescript
import { explainWithShap } from 'explainai-core';

const explanation = await explainWithShap(model, input, {
  samples: 100,
  featureNames: ['feature1', 'feature2', /* ... */]
});
```
Best for:
- Global feature importance
- Understanding overall model behavior
- Additive feature contributions
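Because SHAP attributions are additive, the base value plus the per-feature importances should approximately reconstruct the model's prediction. A quick sanity check, using made-up numbers shaped like the example output shown above (illustrative values, not real library output):

```typescript
// SHAP's additivity property: baseValue + sum(importances) ≈ prediction.
// The numbers below are illustrative, shaped like the example output above.
const baseValue = 38.2;
const importances = [0.45, -0.23, 1.1, 0.8, 0.55, 0.6, 0.4, 0.33, 0.1, 0.2];

const reconstructed =
  baseValue + importances.reduce((sum, x) => sum + x, 0);

// Should approximately equal the reported prediction value (42.5 here).
console.log(reconstructed.toFixed(1));
```

If this identity is badly violated for real output, it usually means too few samples were used in the approximation.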
#### LIME (Local Interpretable Model)
```typescript
import { explainWithLime } from 'explainai-core';

const explanation = await explainWithLime(model, input, {
  samples: 100,
  featureNames: ['feature1', 'feature2', /* ... */]
});
```
Best for:
- Local explanations (individual predictions)
- Understanding specific decisions
- Fast approximations
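Under the hood, LIME-style methods evaluate the model on perturbed copies of the input and fit a simple local surrogate to the results. A minimal sketch of the perturbation step (the noise distribution and scale are assumptions for illustration, not explainai-core's exact internals):

```typescript
// Generate `samples` perturbed copies of an input by adding Gaussian noise
// to each feature - the neighborhood a LIME-style explainer evaluates the
// model on before fitting its local linear surrogate.
function perturb(input: number[], samples: number, scale = 0.1): number[][] {
  const gaussian = (): number => {
    // Box-Muller transform: two uniform draws -> one standard normal draw
    const u = 1 - Math.random(); // avoid log(0)
    const v = Math.random();
    return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
  };
  return Array.from({ length: samples }, () =>
    input.map((x) => x + gaussian() * scale)
  );
}
```

This is why the `samples` option dominates both runtime and fidelity: each perturbed copy costs one model evaluation.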
#### Classification Models

```typescript
const model = createApiModel(apiConfig, {
  modelType: 'classification',
  inputShape: [784],  // e.g., 28x28 image flattened
  outputShape: [10],  // 10 classes
  provider: 'api'
});
```
#### Regression Models

```typescript
const model = createApiModel(apiConfig, {
  modelType: 'regression',
  inputShape: [13],  // e.g., housing features
  outputShape: [1],  // single value prediction
  provider: 'api'
});
```
#### TensorFlow.js Integration

```typescript
import * as tf from '@tensorflow/tfjs';
import { createCustomModel, explain } from 'explainai-core';

// Wrap a TensorFlow.js model
const tfModel = await tf.loadLayersModel('model.json');

const model = createCustomModel(
  async (input: number[]) => {
    const tensor = tf.tensor2d([input]);
    const prediction = tfModel.predict(tensor) as tf.Tensor;
    return prediction.dataSync()[0];
  },
  metadata // a ModelMetadata object, as in the examples above
);

const explanation = await explain(model, input, { method: 'shap' });
```
#### Batch Predictions

```typescript
import { batchPredict } from 'explainai-core';

const inputs = [
  [1, 2, 3, 4, 5],
  [6, 7, 8, 9, 10],
  [11, 12, 13, 14, 15]
];

const predictions = await batchPredict(model, inputs);
```
## TypeScript Support

Full TypeScript definitions are included:

```typescript
import type {
  Model,
  Explanation,
  ExplainabilityMethod,
  FeatureImportance,
  ModelMetadata,
  InputData,
  PredictionResult
} from 'explainai-core';
```
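Since explanations are plain data, post-processing is straightforward — for example, ranking features by absolute importance for a top-k display. A small sketch (the `FeatureImportance` interface below is a local structural type mirroring the `{ feature, importance }` fields shown in the example output above, not an import from the package):

```typescript
// Rank features by absolute importance, e.g. for a top-k summary.
// FeatureImportance here is a local structural type matching the
// { feature, importance } fields shown in the example output above.
interface FeatureImportance {
  feature: number;
  importance: number;
}

function topFeatures(
  items: FeatureImportance[],
  k: number
): FeatureImportance[] {
  return [...items]
    .sort((a, b) => Math.abs(b.importance) - Math.abs(a.importance))
    .slice(0, k);
}
```

Sorting on the absolute value matters: a large negative importance is just as informative as a large positive one.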
## Performance Tips

1. Sample size: more samples give more accurate explanations but slower computation.
   - SHAP: 100-500 samples for most cases
   - LIME: 50-200 samples is usually sufficient
2. Batch processing: use `batchPredict` for multiple inputs.
3. Caching: cache model predictions when possible.
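Tip 3 can be applied by memoizing the prediction function before wrapping it (e.g. with `createCustomModel`): SHAP and LIME sampling may re-evaluate identical inputs, so a cache can cut remote API calls. A minimal sketch — the wrapper below is an illustration, not part of explainai-core:

```typescript
type PredictFn = (input: number[]) => Promise<number>;

// Memoize an async prediction function, keyed on the serialized input.
// JSON.stringify is a safe cache key here because inputs are plain
// number arrays.
function withCache(predict: PredictFn): PredictFn {
  const cache = new Map<string, number>();
  return async (input: number[]) => {
    const key = JSON.stringify(input);
    const hit = cache.get(key);
    if (hit !== undefined) return hit;
    const value = await predict(input);
    cache.set(key, value);
    return value;
  };
}
```

Note the cache is unbounded; for long-running processes you may want an LRU eviction policy instead.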
## Error Handling

```typescript
import { ExplainAIError } from 'explainai-core';

try {
  const explanation = await explain(model, input, options);
} catch (error) {
  if (error instanceof ExplainAIError) {
    console.error('ExplainAI Error:', error.message);
    console.error('Details:', error.details);
  }
}
```
## Related Packages

- `explainai-ui` - React visualization components
- `explainai-node` - Node.js CLI tools
- `explainai-playground` - Interactive demo
## Documentation

- Full Documentation
- API Reference
- Examples
- Getting Started Guide
## Requirements

- Node.js ≥18.0.0
- TypeScript ≥5.0.0 (for TypeScript projects)
## License

MIT - see LICENSE
## Contributing

Contributions welcome! See the Contributing Guide.
## Author

Yash Gupta (@gyash1512)