# SSPO | Self-Supervised Prompt Optimization
A Self-Supervised Prompt Optimization (SSPO) implementation in TypeScript. (Research Paper)

```bash
npm install sspo
```
- 🏷️ **Zero Supervision** - No ground truth or human feedback required
- ⚡ **Universal Adaptation** - Supports both closed and open-ended tasks
- 🔄 **Self-Evolving** - Auto-optimization via an LLM-as-judge mechanism
#### Prerequisites

- Node.js
- npm
- OpenAI API key
#### Installation

```bash
npm install
npm run build
```
#### Configuration

Configure LLM parameters in `src/config/config.yaml`:

```yaml
models:
  gpt-4o-mini:
    api_key: "your-api-key-here"
    base_url: "https://api.openai.com/v1"
    temperature: 1
    top_p: 1
```
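Once parsed, this configuration maps naturally to a record of per-model settings. The type names below are illustrative only; the project's actual loader (`src/utils/load.ts`) may use different names:

```typescript
// Hypothetical types mirroring config.yaml above; names are
// assumptions, not the project's real interfaces.
interface ModelConfig {
  api_key: string;
  base_url: string;
  temperature: number;
  top_p: number;
}

interface Config {
  models: Record<string, ModelConfig>;
}

// Object form of the YAML example above:
const config: Config = {
  models: {
    "gpt-4o-mini": {
      api_key: "your-api-key-here",
      base_url: "https://api.openai.com/v1",
      temperature: 1,
      top_p: 1,
    },
  },
};

// Look up one model's settings by name:
const settings = config.models["gpt-4o-mini"];
```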
Create a template file in `src/settings/template.yaml`:

```yaml
prompt: |
  Please solve the following problem.
requirements: |
  Generate more detailed explanations and use clear reasoning steps.
count: 50
qa:
  - question: |
      What is 2 + 2?
    answer: |
      4
  - question: |
      Explain photosynthesis.
    answer: |
      Photosynthesis is the process by which plants convert sunlight into energy...
```
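The template fields map onto a simple typed structure. This sketch shows the shape such a template takes in TypeScript; the type names are assumptions, not the project's actual interfaces:

```typescript
// Hypothetical types for a task template; field names follow the
// YAML above, type names are illustrative.
interface QAPair {
  question: string;
  answer: string;
}

interface TaskTemplate {
  prompt: string;       // initial prompt to optimize
  requirements: string; // optimization requirements
  count: number;        // numeric parameter from the template
  qa: QAPair[];         // QA pairs used during evaluation
}

const template: TaskTemplate = {
  prompt: "Please solve the following problem.",
  requirements: "Generate more detailed explanations and use clear reasoning steps.",
  count: 50,
  qa: [
    { question: "What is 2 + 2?", answer: "4" },
    {
      question: "Explain photosynthesis.",
      answer: "Photosynthesis is the process by which plants convert sunlight into energy...",
    },
  ],
};
```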
#### Option 1: Command Line Interface
```bash
# Basic usage
npm run optimize
```
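Since `optimize` is an npm script, any of the options listed below are forwarded to it after a `--` separator. For example (flag values are illustrative):

```shell
# Forward CLI options through npm to the optimizer
npm run optimize -- \
  --opt-model gpt-4o-mini \
  --max-rounds 5 \
  --template Poem.yaml \
  --name poem
```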
Available options:
```bash
--opt-model      Model for optimization (default: gpt-4o-mini)
--opt-temp       Temperature for optimization (default: 0.7)
--eval-model     Model for evaluation (default: gpt-4o-mini)
--eval-temp      Temperature for evaluation (default: 0.3)
--exec-model     Model for execution (default: gpt-4o-mini)
--exec-temp      Temperature for execution (default: 0)
--workspace      Output directory path (default: workspace)
--initial-round  Initial round number (default: 1)
--max-rounds     Maximum number of rounds (default: 10)
--template       Template file name (default: Poem.yaml)
--name           Project name (default: Poem)
--mode           Execution model mode: base_model or reasoning_model (default: base_model)
```

#### Option 2: Programmatic Usage
```typescript
import { SPO_LLM, PromptOptimizer } from './src';

// Initialize LLM settings
SPO_LLM.initialize(
  { model: 'gpt-4o-mini', temperature: 0.7 }, // optimization model
  { model: 'gpt-4o-mini', temperature: 0.3 }, // evaluation model
  { model: 'gpt-4o-mini', temperature: 0 },   // execution model
  'base_model'                                // execution mode
);
// Create and run optimizer
const optimizer = new PromptOptimizer(
'workspace', // Output directory
1, // Starting round
10, // Maximum optimization rounds
'poem', // Project name
'Poem.yaml' // Template file
);
await optimizer.optimize();
```

#### Output Structure
```
workspace/
└── poem/
    └── prompts/
        ├── results.json
        ├── round_1/
        │   ├── answers.txt
        │   └── prompt.txt
        ├── round_2/
        │   ├── answers.txt
        │   └── prompt.txt
        └── ...
```

- `results.json`: Optimization history and success metrics
- `prompt.txt`: Optimized prompt for each round
- `answers.txt`: Generated outputs using the prompt

#### 🛠️ Development
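As a development aid, the per-round artifacts above can be read back programmatically. The `results.json` schema shown here is an assumption for illustration; inspect your own workspace output for the real fields:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Hypothetical per-round record; the actual results.json schema
// may differ.
interface RoundResult {
  round: number;
  succeed: boolean;
  prompt: string;
}

// Read results.json from workspace/<name>/prompts/results.json.
function loadResults(workspace: string, name: string): RoundResult[] {
  const file = path.join(workspace, name, "prompts", "results.json");
  return JSON.parse(fs.readFileSync(file, "utf-8")) as RoundResult[];
}

// Count rounds the judge marked as improvements.
function successfulRounds(results: RoundResult[]): number {
  return results.filter((r) => r.succeed).length;
}
```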
#### Project Structure
```
src/
├── components/            # Core optimization components
│   ├── optimizer.ts       # Main PromptOptimizer class
│   └── evaluator.ts       # Evaluation logic
├── utils/                 # Utility modules
│   ├── llmClient.ts       # LLM interface and management
│   ├── dataUtils.ts       # Data handling and persistence
│   ├── promptUtils.ts     # Prompt file operations
│   ├── evaluationUtils.ts # Evaluation utilities
│   └── load.ts            # Configuration loading
├── llm/                   # LLM implementations
│   └── asyncLlm.ts        # Async LLM client
├── prompts/               # Prompt templates
│   ├── optimizePrompt.ts
│   └── evaluatePrompt.ts
├── config/                # Configuration files
├── settings/              # Task templates
├── optimize.ts            # CLI interface
└── index.ts               # Main entry point
```

#### Available Scripts
```bash
npm run build # Compile TypeScript
npm run start # Run compiled application
npm run dev # Run with ts-node (development)
npm run optimize # Run CLI optimizer
npm run clean # Clean build directory
```

#### License

This project is licensed under the MIT License - see the LICENSE file for details.