* Programmable Prompt Engineering (PPE) language: a simple, natural scripting language designed for handling prompt information. It is used to develop agents that can be reused, inherited, combined, or called.
* Achieves or approximates the performance of ChatGPT 4 with medium- to small-scale open-source LLMs (35B-4B parameters).
* User-friendly for AI development and the creation of intelligent applications...
* Low-code or even no-code solutions for rapid AI development...
* Flexible: add custom instructions within scripts and make inter-script calls...
* Open data: the input, output, and even internal data are fully accessible from the script.
* Powerful: seamless event transmission between client and server, with numerous utility functions...
* Secure: supports encrypted execution and usage limits for scripts (TODO)...
* Enables local deployment and execution of large language models (LLMs) such as LLaMA, Qwen, Gemma, Phi, GLM, Mistral, and more.
* The AI Agent Script follows the Programmable Prompt Engine Specification.
* Visit the site for detailed AI Agent script usage.
* PPE Fixtures Unit Test
  * Unit Test Fixture Demo: https://github.com/offline-ai/cli/tree/main/examples/split-text-paragraphs
* Smart caching of LLM and intelligent-agent invocation results to accelerate execution and reduce token expenses.
* Support for multiple LLM service providers:
  * (Recommended) Built-in local LLM provider (llama.cpp), used as the default to protect the security and privacy of your knowledge.
    * Download a GGUF model file first: `ai brain download hf://bartowski/Qwen_QwQ-32B-GGUF -q q4_0`
    * Run with the default brain model file: `ai run example.ai.yaml`
    * Run with a specified model file: `ai run example.ai.yaml -P local://bartowski-qwq-32b.Q4_0.gguf`
  * OpenAI-compatible service providers:
    * OpenAI: `ai run example.ai.yaml -P openai://chatgpt-4o-latest --apiKey 'sk-XXX'`
    * DeepSeek: `ai run example.ai.yaml -P openai://deepseek-chat -u https://api.deepseek.com/ --apiKey 'sk-XXX'`
    * Siliconflow: `ai run example.ai.yaml -P openai://Qwen/Qwen2.5-Coder-7B-Instruct -u https://api.siliconflow.cn/ --apiKey 'sk-XXX'`
    * Anthropic (Claude): `ai run example.ai.yaml -P openai://claude-3-7-sonnet-latest -u https://api.anthropic.com/v1/ --apiKey 'sk-XXX'`
  * llama-cpp server (llama-server) provider: `ai run example.ai.yaml -P llamacpp`
    * The llama-cpp server does not support specifying a model name; the model is set with the model parameter when llama-server is started.
* You can specify or arbitrarily switch the LLM model or provider in the PPE script:
```yaml
---
parameters:
  model: openai://deepseek-chat
  apiUrl: https://api.deepseek.com/
  apiKey: "sk-XXX"
---
system: You are a helpful assistant.
user: "tell me a joke"
---
assistant: "[[AI]]"
---
assistant: "[[AI:model='local://bartowski-qwq-32b.Q4_0.gguf']]"
```
* Built-in local LLM provider (llama.cpp) features:
  * By default it automatically detects memory and GPU and picks the best computing layer. It automatically allocates GPU layers and the context window size (adopting the largest possible value) to get the best performance from the hardware without any manual configuration.
    * It is still recommended to configure the context window yourself.
  * System security: support for system-template anti-injection (to prevent jailbreaking).
* Support for general tool invocation (Tool Funcs) with any LLM model (built-in local LLM provider only):
  * Works without specific tool-calling training, as long as the LLM can accurately follow instructions.
  * Minimum adaptation is a 3B model; 7B and above is recommended.
  * Dual permission control: 1. scripts set the list of tools the AI can use; 2. users set the list of tools scripts can use.
* Support for a general thinking mode (shouldThink) for large models (built-in local LLM provider only):
  * Works without specific training, as long as the LLM can accurately follow instructions.
  * Answer first, then think (last).
  * Think first, then answer (first).
  * Think deeply, then answer (deep): 7B and above.
* Package support.
* PPE supports direct invocation of WASM.
* Support for multiple structured response output format types (response_format.type), as shown in the sketch below:
  * JSON format
  * YAML format
  * Natural Language Object (NOBJ) format
  * Set the output with a JSON Schema; PPE automatically parses the AI-generated content in the corresponding format into an object for code use.
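For example, a script can declare the structure it expects in its front matter and let PPE parse the model's reply into an object. A minimal sketch, assuming the front-matter keys `response_format` and `output` are spelled as described above; the schema itself is illustrative:

```yaml
---
# ask for JSON output and let PPE parse/validate it against the schema below
response_format:
  type: json
output:
  type: object
  properties:
    sentiment:
      type: string
      description: "positive or negative"
  required: ["sentiment"]
---
system: You are a sentiment classifier.
user: "Classify the sentiment of: {{content}}"
```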
Developing an intelligent application with the AI Agent Script Engine involves just three steps:
* Choose an appropriate brain 🧠 (LLM, Large Language Model):
  * Select a parameter size based on your application's requirements; larger sizes offer better quality but consume more resources and increase response time...
  * Choose the model's expertise: different models are trained with distinct methods and datasets, resulting in unique capabilities...
  * Optimize quantization: higher levels of quantization (compression) result in faster speed and smaller size, but potentially lower accuracy...
  * Decide on the optimal context window size (max_tokens): typically, 2048 is sufficient; this parameter also influences model performance...
  * Use the client (@offline-ai/cli) directly to download the AI brain: `ai brain download`
* Create the AI application's agent script file and debug the prompts using the client (@offline-ai/cli): `ai run your_script.ai.yaml --interactive --loglevel info`
* Integrate the script into your AI application.
* One-click packaging into standalone intelligent applications (TODO)
* Quick Start Programming Guide
* More examples
* AI applications written in the PPE language:
  * AI Guide App for the PPE Guide - WIP
    * Run `ai run guide` in the project root folder to start the guide.
  * AI Terminal Shell
* LLM inference providers:
  * llamacpp: the llama.cpp server is the default local LLM provider. If no provider is specified, llamacpp is used.
  * openai: also supports OpenAI-compatible service API providers, e.g.:
    * `--provider openai://chatgpt-4o-latest --apiKey 'sk-XXX'`
Note: Limitations of OpenAI-Compatible Service API Providers
1. The OpenAI model must be a large model (gpt-4o) released after 2024-07-18 to support json-schema. Before this date, only json is guaranteed, not json-schema.
2. All Siliconflow models only guarantee json support, not json-schema support.
3. `[[Fruit:|Apple|Banana]]`: the syntax for forcing the AI to choose either Apple or Banana will be invalid.
`ai` is the shell CLI command, used mainly to manage brain (LLM) files and run PPE agent scripts.
Run a script file with `ai run`, e.g., `ai run -f calculator.ai.yaml "{content: '32+1253'}"`

* `-f` specifies the script file; `{content: '32+1253'}` is optional JSON input to the script.
* Scripts display intermediate echo output while streaming. This can be controlled with `--streamEcho true|line|false`. To keep the displayed echo output, use `--no-consoleClear`.
* A script can be a single YAML file (.ai.yaml) or a directory.
  * A directory must have an entry-point script file with the same name as the directory; the other scripts in the directory can call each other.
* Manage brain files with `ai brain`, including `ai brain download` and `ai brain list/search`.
* Run `ai help` or `ai help [command]` for more.
Programmable Prompt Engine (PPE) Language is a message-processing language, similar to the YAML format.
PPE is designed to define AI prompt messages and their input/output configurations. It allows for the creation of a reusable and programmable prompt system akin to software engineering practices.
Basic Structure
* Message-Based Dialogue: defines interactions as a series of messages with roles (system, user, assistant).
* YAML-Like: syntax is similar to YAML, making it readable and easy to understand.
* Dialogue Separation: uses triple dashes (---) or asterisks (**) to clearly mark dialogue turns.
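A minimal sketch of this structure, using only the constructs named above (the `[[AI]]` placeholder that triggers a model response is covered below under AI replacement):

```yaml
system: You are a helpful assistant.
---
user: "Tell me a joke."
---
assistant: "[[AI]]"
```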
Configuration and Reusability
* Input/Output Configuration (Front-Matter): defines input requirements (using the `input` keyword) and the expected output format (using the `output` keyword with JSON Schema).
* Prompt Template: embeds variables from the input configuration or prompt settings into messages using Jinja2 templates ({{variable_name}}).
* Custom Script Types: allows defining reusable script types (`type: type`) for code and configuration inheritance.
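Putting front matter and templating together, a sketch of a small reusable script; the list form of `input` is an assumption, while the `{{content}}` template and the `output` schema follow the description above:

```yaml
---
# front matter: declared input and expected output
input:
  - content
output:
  type: object
  properties:
    summary:
      type: string
---
user: "Summarize the following text: {{content}}"
```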
AI Substitutions
* Advanced AI Replacement: use double brackets ([[Response]]) to trigger AI execution, store the response in a variable (prompt.Response), and use it within the script.
* AI Parameter Control: fine-tune AI behavior by passing parameters within double brackets (e.g., [[Answer:temperature=0.7]]).
* Constrained AI Responses: limit AI outputs to a predefined set of options (e.g., [[FRUITS:|Apple|Banana]]).
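A sketch combining all three forms; reading the stored value back through a `{{FRUITS}}` template is an assumption based on the variable-storage behavior described above:

```yaml
user: "Name a fruit."
assistant: "[[FRUITS:|Apple|Banana]]"
---
user: "Why did you pick {{FRUITS}}?"
assistant: "[[Reason:temperature=0.7]]"
```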
The role messages can be formatted using Jinja2 templates and advanced replacement features.
* Jinja2 Templates: reference variables from the input configuration or prompt settings using double curly braces (e.g., {{name}}).
* Advanced AI Replacement: as described above, triggers AI execution and stores the response.
* External Script Replacement: invoke external scripts using the @ symbol (e.g., @say_hi_script(param1=value1)).
* Internal Instruction Replacement: call internal instructions similarly (e.g., @$instruction(param1=value1)).
* Regular Expression Replacement: use /RegExp/[RegOpts]:Answer[:index_or_group_name] for pattern-based replacement on the Answer variable.
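For illustration only: `say_hi_script` is a hypothetical external script, and the last line simply exercises the `/RegExp/:Answer` form described above to pull the first number out of the stored answer:

```yaml
user: "@say_hi_script(name={{name}}) Then tell me your favorite number."
---
assistant: "[[Answer]]"
---
user: 'So the first number in your answer was /\d+/:Answer, right?'
```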
Flow Control and Extensibility
* Chaining Outputs: the -> operator connects script outputs to subsequent instructions or scripts, creating complex workflows.
* Instruction Invocation: the $ prefix calls script instructions (e.g., $fn: {param1:value1} or $fn(param1=value1)).
* Control Flow: directives like $if, $pipe, $while, and $match provide control-flow mechanisms.
* Event-Driven Architecture: functions like $on, $once, $emit, and $off enable event-based programming for flexible script behavior.
* Script Extension (see the sketch below):
  * The !fn directive allows declaring JavaScript functions to extend script functionality.
  * The import configuration allows importing external scripts and modules.
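A rough sketch of extension plus chaining; the `$fn(param=value)` call form and the `->` operator come from the list above, while `double` and `$echo` are hypothetical and the exact `!fn` body syntax is an assumption:

```yaml
---
# declare a JavaScript helper with !fn (body syntax assumed)
!fn |-
  function double({n}) { return n * 2 }
---
user: "Double three for me."
# call the helper and pipe its output to the next instruction with ->
$double(n=3) -> $echo
```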
```bash
# install again
npm install -g @offline-ai/cli
```
Run
Run your AI agent script, e.g., the Dobby character:
```bash
$ ai run --interactive --script examples/char-dobby
```
Run the translator script library directly:
```bash
# API mode: translate the TODO file to English
$ ai run -f translator "{file: './TODO', target: 'English'}"

# interactive mode
$ ai run -if translator
```
Usage
Install the CLI globally:
```sh-session
$ npm install -g @offline-ai/cli
$ ai COMMAND
running command...
$ ai (--version)
@offline-ai/cli/0.10.4 linux-x64 node-v20.18.3
$ ai --help [COMMAND]
USAGE
  $ ai COMMAND
...
```
Search for and download a brain (LLM) from Hugging Face.
Choose one to download, or type more keywords to narrow the list of brains (models).
Note:
* All quantized (compressed) brain 🧠 models are uploaded by users themselves, so there is no guarantee that every user-quantized (compressed) brain 🧠 model actually works.
* There are already tens of thousands of GGUF quantized brain 🧠 models, and many of them are duplicates.
AI Brain List

Display the brain list, which is filtered to featured brains by default. To display all brains, use the --no-onlyFeatured option.

```bash
# list the downloaded brains
# = ai brain list --downloaded
$ ai brain
$ ai brain list --downloaded
1. name: "deepseek-v2-chat", likes: 17, downloads: 1189, hf_repo: "leafspark/DeepSeek-V2-Chat-GGUF"
   * IQ2_XXS: deepseek-v2-chat.IQ2_XXS-00001-of-00003.gguf (3 files)
   * IQ3_XS: deepseek-v2-chat.IQ3_XS-00001-of-00008.gguf (8 files)
   * Q2_K: deepseek-v2-chat.Q2_K-00001-of-00005.gguf (5 files)
   * Q3_K_M: deepseek-v2-chat.Q3_K_M-00001-of-00006.gguf (6 files)
   * Q5_K_M: deepseek-v2-chat.Q5_K_M-00001-of-00008.gguf (8 files)
   * Q6_K: deepseek-v2-chat.Q6_K-00001-of-00010.gguf (10 files)
   * Q8_0: deepseek-v2-chat.Q8_0-00001-of-00012.gguf (12 files)
total: 1

# Download a brain. If the keywords match multiple models, you will be asked to pick one.
# llama3-8b is the name of the brain model to search for.
# -q q4_0 is the quantization level to download; if not provided, you will be prompted to choose one.
# --hubUrl is the mirror URL address of Hugging Face.
$ ai brain download llama3-8b -q Q4_0 --hubUrl=huggingface-mirror-url-address
```
After downloading, you can find the brainDir here:
```bash
$ ai config brainDir
{
  "brainDir": "~/.local/share/ai/brain"
}
```
You can create your config in ~/.config/ai/.ai.yaml, or in JSON format as ~/.config/ai/.ai.json.
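A minimal sketch of such a config file. `brainDir` appears in the example above; `agentDirs` and `logLevel` mirror the CLI flags documented below, and treating them as config keys is an assumption:

```yaml
# ~/.config/ai/.ai.yaml (illustrative)
brainDir: ~/.local/share/ai/brain   # where downloaded brain (LLM) files live
agentDirs:                          # search paths for ai-agent scripts (assumed key)
  - ./agents
logLevel: info                      # default log level (assumed key)
```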
Download and run the LLM backend server, llama.cpp:
```bash
mkdir llamacpp
cd llamacpp
wget https://github.com/ggerganov/llama.cpp/releases/download/b3091/llama-b3091-bin-ubuntu-x64.zip
unzip llama-b3091-bin-ubuntu-x64.zip
cd build/bin

# run the server
# -ngl 33 means GPU layers to load; adjust it according to your GPU
# -c 4096 means max context length
# -t 4 means thread count
./server -t 4 -c 4096 -ngl 33 -m ~/.local/share/ai/brain/your-brain-model.gguf
```
Now you can run your AI agent:
```bash
# the .ai.yaml extension is optional.
# by default the current working dir is searched; you can configure the search paths in agentDirs.
# -f means the agent file
# -i means entering interactive mode
$ ai run -if examples/char-dobby
Dobby: I am Dobby. Dobby is happy.
You: intro yourself pls.
Dobby: I am Dobby. I'm a brave and loyal house-elf, and I'm very proud to be a free elf. I love socks and wearing mismatched pairs.

# provide the content and the JSON schema in the output field; it will output the JSON data.
$ ai run -f examples/json '{content: "I recently purchased the Razer BlackShark V2 X Gaming Headset, and it has significantly enhanced my gaming experience. This headset offers incredible sound quality, comfort, and features that are perfect for any serious gamer. Here’s why I highly recommend it: The 7.1 surround sound feature is a game-changer. The audio quality is superb, providing a truly immersive experience. I can clearly hear directional sounds, which is crucial for competitive gaming. The depth and clarity of the sound make it feel like I’m right in the middle of the action. The 50mm drivers deliver powerful, high-quality sound. The bass is deep and punchy without being overwhelming, while the mids and highs are crisp and clear. This balance makes the headset versatile, not only for gaming but also for listening to music and watching movies.", "output":{"type":"object","properties":{"sentiment":{"type":"string","description":"Sentiment (positive or negative)"},"products":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string","description":"Name of the product"},"brand":{"type":"string","description":"Company that made the product"}}},"description":"Products mentioned in the review"},"anger":{"type":"boolean","description":"Is the reviewer expressing anger?"}},"required":["sentiment","products","anger"]}}'
```
* By default, the chat history from a run is stored in ~/.local/share/ai/logs/chats/[script_file_basename]/history. You can check seeds, temperature, and other information there.
* In interactive mode, the history is loaded automatically by default. If you don't need it, use --new-chat.
* In non-interactive mode, the history is not loaded automatically; a new history is generated for each run.
* To completely disable history, use --no-chats.
Embed the script into your own code (locally) as follows:
```ts
// AIScriptEx is assumed to be exported by the same package as AIScriptServer
import { AIScriptEx, AIScriptServer } from '@isdk/ai-tool-agent';

// Configure your script search path
AIScriptEx.searchPaths = ['.']
const script = AIScriptServer.load('examples/json')
// Default to streaming responses from the large model
script.llmStream = true

const content = "I recently purchased the Razer BlackShark V2 X Gaming Headset, and it has significantly enhanced my gaming experience. This headset offers incredible sound quality, comfort, and features that are perfect for any serious gamer. Here’s why I highly recommend it: The 7.1 surround sound feature is a game-changer. The audio quality is superb, providing a truly immersive experience. I can clearly hear directional sounds, which is crucial for competitive gaming. The depth and clarity of the sound make it feel like I’m right in the middle of the action. The 50mm drivers deliver powerful, high-quality sound. The bass is deep and punchy without being overwhelming, while the mids and highs are crisp and clear. This balance makes the headset versatile, not only for gaming but also for listening to music and watching movies."
const output = {
  "type": "object",
  "properties": {
    "sentiment": {"type": "string", "description": "Sentiment (positive or negative)"},
    "products": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "name": {"type": "string", "description": "Name of the product"},
          "brand": {"type": "string", "description": "Company that made the product"}
        }
      },
      "description": "Products mentioned in the review"
    },
    "anger": {"type": "boolean", "description": "Is the reviewer expressing anger?"}
  },
  "required": ["sentiment", "products", "anger"]
}

const result = await script.exec({content, output})
console.log(result)
// You can see the JSON result output by the large model:
// {
//   "sentiment": "positive",
//   "products": [
//     { "name": "Razer BlackShark V2 X Gaming Headset", "brand": "Razer" }
//   ],
//   "anger": false
// }
```
```
FLAGS
  -b, --brainDir=    the brains (LLM) directory
  -n, --count=       [default: 100] the max number of brains to list, 0 means all
  -r, --refresh      refresh the online brains list
  -s, --search=      the json filter to search for brains
  -u, --hubUrl=      the hub mirror url
  -v, --verifyQuant  whether to verify quants when refreshing
  --[no-]banner      show banner
  --config=          the config file

GLOBAL FLAGS
  --json  Format output as json.

DESCRIPTION
  🧠 The AI Brains (LLM) Manager.

  Manage AI brains 🧠 here.
  📃 List downloaded or online brains
  🔍 Search for brains
  📥 Download brains
  ❌ Delete brains

EXAMPLES
  $ ai brain                 # list downloaded brains
  $ ai brain list --online   # list online brains
  $ ai brain download
```
```
FLAGS
  -a, --all                list all brains (including downloaded)
  -b, --brainDir=          the brains (LLM) directory
  -d, --downloaded         list downloaded brains
  -f, --[no-]onlyFeatured  only list featured brains
  -n, --count=             [default: 100] the max number of brains to list, 0 means all
  -r, --refresh            refresh the online brains list
  -s, --search=            the json filter to search for brains
  -u, --hubUrl=            the hub mirror url
  --[no-]banner            show banner
  --config=                the config file
```
```
FLAGS
  -b, --brainDir=    the brains (LLM) directory
  -c, --maxCount=    [default: -1] the max number of brains to refresh, -1 means no limit
  -u, --hubUrl=      the hub mirror url
  -v, --verifyQuant  whether to verify quants when refreshing
```
```
FLAGS
  -a, --[no-]all           list all brains (including downloaded)
  -b, --brainDir=          the brains (LLM) directory
  -d, --downloaded         list downloaded brains
  -f, --[no-]onlyFeatured  only list featured brains
  -n, --count=             [default: 100] the max number of brains to list, 0 means all
  -r, --refresh            refresh the online brains list
  -s, --search=            the json filter to search for brains
  -u, --hubUrl=            the hub mirror url
  --[no-]banner            show banner
  --config=                the config file
```
```
ARGUMENTS
  ITEM_NAME  the config item name path to get

FLAGS
  -A, --aiPreferredLanguage=    the ISO 639-1 code for the AI preferred language to translate the user input automatically, e.g., en
  -C, --streamEchoChars=        [default: 80] stream echo max characters limit
  -D, --data=...                the data which will be passed to the ai-agent script: key1=value1 key2=value2
  -L, --userPreferredLanguage=  the ISO 639-1 code for the user preferred language to translate the AI result automatically, e.g., en, zh, ja, ko
  -P, --provider=               the LLM provider, defaults to llamacpp
  -a, --arguments=              the json data which will be passed to the ai-agent script
  -b, --brainDir=               the brains (LLM) directory
  -d, --dataFile=               the data file which will be passed to the ai-agent script
  -e, --streamEcho=             [default: line] stream echo mode

GLOBAL FLAGS
  --json  Format output as json.

DESCRIPTION
  🛠️ Manage the AI Configuration.

  Shows the current configuration if no command is given.

EXAMPLES
  # list all configurations
  $ ai config

  # get the brainDir config item
  $ ai config brainDir
  AI Configuration:
  {
    "brainDir": "~/.local/share/ai/brain"
  }
```
```
ARGUMENTS
  DATA  the json data which will be passed to the ai-agent script

FLAGS
  -A, --aiPreferredLanguage=    the ISO 639-1 code for the AI preferred language to translate the user input automatically, e.g., en
  -C, --streamEchoChars=        [default: 80] stream echo max characters limit
  -D, --data=...                the data which will be passed to the ai-agent script: key1=value1 key2=value2
  -L, --userPreferredLanguage=  the ISO 639-1 code for the user preferred language to translate the AI result automatically, e.g., en, zh, ja, ko
  -P, --provider=               the LLM provider, defaults to llamacpp
  -a, --arguments=              the json data which will be passed to the ai-agent script
  -b, --brainDir=               the brains (LLM) directory
  -d, --dataFile=               the data file which will be passed to the ai-agent script
  -e, --streamEcho=             [default: line] stream echo mode
  -f, --script=                 the ai-agent script file name or id
  -i, --[no-]interactive        interactive mode
  -k, --backupChat              whether to backup chat history before start, defaults to false
  -l, --logLevel=               the log level
  -m, --[no-]stream             stream mode, defaults to true
  -n, --[no-]newChat            whether to start a new chat history, defaults to false in interactive mode, true in non-interactive
  -p, --promptDirs=...          the prompts template directory
  -s, --agentDirs=...           the search paths for ai-agent script files
  -t, --inputs=                 the input histories folder for interactive mode to record
  -u, --api=                    the api URL
  --apiKey=                     the api key (optional)
  --[no-]banner                 show banner
  --config=                     the config file
  --histories=                  the chat histories folder to record
  --logLevelMaxLen=             the max length of a log item to display
  --no-chats                    disable chat histories, defaults to false
  --no-inputs                   disable input histories, defaults to false
```
```
FLAGS
  -f, --force    Force npm to fetch remote resources even if a local copy exists on disk.
  -h, --help     Show CLI help.
  -s, --silent   Silences npm output.
  -v, --verbose  Show verbose npm output.

GLOBAL FLAGS
  --json  Format output as json.

DESCRIPTION
  Installs a plugin into ai.

  Uses npm to install plugins.

  Installation of a user-installed plugin will override a core plugin.

  Use the AI_NPM_LOG_LEVEL environment variable to set the npm loglevel.
  Use the AI_NPM_REGISTRY environment variable to set the npm registry.

ALIASES
  $ ai plugins add

EXAMPLES
  Install a plugin from npm registry.

    $ ai plugins add myplugin

  Install a plugin from a github url.

    $ ai plugins add https://github.com/someuser/someplugin

  Install a plugin from a github slug.

    $ ai plugins add someuser/someplugin
```
ai plugins:inspect PLUGIN...

Displays installation properties of a plugin.

```
USAGE
  $ ai plugins inspect PLUGIN...

ARGUMENTS
  PLUGIN...  [default: .] Plugin to inspect.

FLAGS
  -h, --help     Show CLI help.
  -v, --verbose

GLOBAL FLAGS
  --json  Format output as json.

DESCRIPTION
  Displays installation properties of a plugin.
```
```
FLAGS
  -f, --force    Force npm to fetch remote resources even if a local copy exists on disk.
  -h, --help     Show CLI help.
  -s, --silent   Silences npm output.
  -v, --verbose  Show verbose npm output.

GLOBAL FLAGS
  --json  Format output as json.

DESCRIPTION
  Installs a plugin into ai.

  Uses npm to install plugins.

  Installation of a user-installed plugin will override a core plugin.

  Use the AI_NPM_LOG_LEVEL environment variable to set the npm loglevel.
  Use the AI_NPM_REGISTRY environment variable to set the npm registry.

ALIASES
  $ ai plugins add

EXAMPLES
  Install a plugin from npm registry.

    $ ai plugins install myplugin

  Install a plugin from a github url.

    $ ai plugins install https://github.com/someuser/someplugin
```
```
USAGE
  $ ai plugins link PATH [-h] [--install] [-v]

ARGUMENTS
  PATH  [default: .] path to plugin

FLAGS
  -h, --help      Show CLI help.
  -v, --verbose
  --[no-]install  Install dependencies after linking the plugin.

DESCRIPTION
  Links a plugin into the CLI for development.

  Installation of a linked plugin will override a user-installed or core plugin.

  e.g. If you have a user-installed or core plugin that has a 'hello' command, installing a linked plugin with a
  'hello' command will override the user-installed or core plugin implementation. This is useful for development work.
```
```
FLAGS
  --hard       Delete node_modules and package manager related files in addition to uninstalling plugins.
  --reinstall  Reinstall all plugins after uninstalling.
```
```
ARGUMENTS
  FILE  the script file path, or the json data when the -f switch is set
  DATA  the json data which will be passed to the ai-agent script

FLAGS
  -A, --aiPreferredLanguage=    the ISO 639-1 code for the AI preferred language to translate the user input automatically, e.g., en
  -C, --streamEchoChars=        [default: 80] stream echo max characters limit
  -D, --data=...                the data which will be passed to the ai-agent script: key1=value1 key2=value2
  -L, --userPreferredLanguage=  the ISO 639-1 code for the user preferred language to translate the AI result automatically, e.g., en, zh, ja, ko
  -P, --provider=               the LLM provider, defaults to llamacpp
  -a, --arguments=              the json data which will be passed to the ai-agent script
  -b, --brainDir=               the brains (LLM) directory
  -d, --dataFile=               the data file which will be passed to the ai-agent script
  -e, --streamEcho=             [default: line] stream echo mode
  -f, --script=                 the ai-agent script file name or id
  -i, --[no-]interactive        interactive mode
  -k, --backupChat              whether to backup chat history before start, defaults to false
  -l, --logLevel=               the log level
  -m, --[no-]stream             stream mode, defaults to true
  -n, --[no-]newChat            whether to start a new chat history, defaults to false in interactive mode, true in non-interactive
  -p, --promptDirs=...          the prompts template directory
  -s, --agentDirs=...           the search paths for ai-agent script files
  -t, --inputs=                 the input histories folder for interactive mode to record
  -u, --api=                    the api URL
  --apiKey=                     the api key (optional)
  --[no-]banner                 show banner
  --config=                     the config file
  --[no-]consoleClear           whether to clear the console after stream echo output, defaults to true
  --histories=                  the chat histories folder to record
  --logLevelMaxLen=             the max length of a log item to display
  --no-chats                    disable chat histories, defaults to false
  --no-inputs                   disable input histories, defaults to false

GLOBAL FLAGS
  --json  Format output as json.

DESCRIPTION
  💻 Run ai-agent script file.

  Execute an ai-agent script file and return the result. Use -i for interactive mode.

EXAMPLES
  $ ai run -f ./script.yaml "{content: 'hello world'}" -l info
  ─────────────────────
  │[info]:Start Script: ...
```
```
FLAGS
  -A, --aiPreferredLanguage=    the ISO 639-1 code for the AI preferred language to translate the user input automatically, e.g., en
  -D, --data=...                the data which will be passed to the ai-agent script: key1=value1 key2=value2
  -L, --userPreferredLanguage=  the ISO 639-1 code for the user preferred language to translate the AI result automatically, e.g., en, zh, ja, ko
  -P, --provider=               the LLM provider, defaults to llamacpp
  -a, --arguments=              the json data which will be passed to the ai-agent script
  -b, --brainDir=               the brains (LLM) directory
  -c, --runCount=               [default: 1] the number of times to run each test case, checking whether the results are consistent with the previous run and recording the counts of matching and non-matching results
  -d, --dataFile=               the data file which will be passed to the ai-agent script
  -e, --streamEcho=             [default: line] stream echo mode, defaults to true
  -e, --streamEchoChars=        [default: 80] stream echo max characters limit, defaults to no limit
  -f, --script=                 the ai-agent script file name or id
  -g, --generateOutput          generate output to the fixture file if no output is provided
  -i, --includeIndex=...        the index of the fixtures to run
  -k, --backupChat              whether to backup chat history before start, defaults to false
  -l, --logLevel=               the log level
  -m, --[no-]stream             stream mode, defaults to true
  -n, --[no-]newChat            whether to start a new chat history, defaults to false in interactive mode, true in non-interactive
  -p, --promptDirs=...          the prompts template directory
  -s, --agentDirs=...           the search paths for ai-agent script files
  -t, --inputs=                 the input histories folder for interactive mode to record
  -u, --api=                    the api URL
  -x, --excludeIndex=...        the index of the fixtures to exclude from running
  --apiKey=                     the api key (optional)
  --[no-]banner                 show banner
  --[no-]checkSchema            whether to check the JSON schema of the output
  --config=                     the config file
  --[no-]consoleClear           whether to clear the console after stream output, defaults to true in interactive, false in non-interactive
  --histories=                  the chat histories folder to record
  --logLevelMaxLen=             the max length of a log item to display
  --no-chats                    disable chat histories, defaults to false
  --no-inputs                   disable input histories, defaults to false

GLOBAL FLAGS
  --json  Format output as json.

DESCRIPTION
  🔬 Run simple AI fixtures to test (draft).

  Execute a fixtures file to test an AI script file and check the results.
```