XML AI is the fastest and most ergonomic way to get structured input and output from your large language model.
Just write your prompt as JSON and we'll automatically convert it to XML that the model can work with. The best part? You can stream the response back as a JSON object in real time. No more sacrificing streaming for function-calling or schema-following capabilities. Built and optimized for Anthropic's Claude models, with OpenAI support included as well.
The library is designed to be as lightweight as possible, with nearly identical APIs whether you are working in Python or Javascript/Typescript.
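Conceptually, the conversion is simple: each key in your input object becomes an XML element wrapping its value. A rough sketch of the idea in plain Python (not the library's actual code):

```python
def to_xml(fields: dict) -> str:
    # Each key becomes a tag wrapping its value, matching the shape of the
    # generated prompt shown at the end of this README.
    return "\n".join(f"<{key}>{value}</{key}>" for key, value in fields.items())

print(to_xml({
    "question": "what is the answer to the ultimate question of life?",
    "reference": "The Hitchhiker's Guide to the Galaxy",
}))
# <question>what is the answer to the ultimate question of life?</question>
# <reference>The Hitchhiker's Guide to the Galaxy</reference>
```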
### Python

```bash
pip install xmlai
```

```python
from anthropic import Anthropic
from xmlai.llm import anthropic_prompt

anthropic = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

prompt = anthropic_prompt(
    {
        "question": "what is the answer to the ultimate question of life?",
        "reference": "The Hitchhiker's Guide to the Galaxy",
    },
    response_root_tag="answer",
)

completion = anthropic.completions.create(
    model="claude-instant-1",
    max_tokens_to_sample=300,
    temperature=0.1,
    **prompt,
)

completion.completion  # 42
```

### Typescript
```bash
pnpm install xmlai
```

```typescript
import Anthropic from "@anthropic-ai/sdk";
import { anthropic_prompt } from "xmlai/llm";

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

const prompt = anthropic_prompt(
  {
    question: "what is the answer to the ultimate question of life?",
    reference: "The Hitchhiker's Guide to the Galaxy",
  },
  { response_root_tag: "answer" }
);

const completion = await anthropic.completions.create({
  model: "claude-instant-1",
  max_tokens_to_sample: 300,
  temperature: 0.1,
  ...prompt,
});

completion.completion; // 42
```

The generated prompts look like this:
```json
{
  "prompt": "\n\nHuman:<question>what is the answer to the ultimate question of life?</question>\n<reference>The Hitchhiker's Guide to the Galaxy</reference>\n\nAssistant:<answer>",
  "stop_sequences": [
    "</answer>"
  ]
}
```
Note that we feed the opening `<answer>` tag to the beginning of the assistant's response! This, combined with using the closing `</answer>` tag as a stop sequence, almost always ensures that the response is valid XML.
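As a quick illustration of what that buys you (plain Python using only the standard library, not part of the xmlai API): the truncated completion can be wrapped back in its root tags and handed to any ordinary XML parser.

```python
import xml.etree.ElementTree as ET

# Stand-in for completion.completion: everything the model produced after
# the opening <answer> tag, truncated at the </answer> stop sequence.
raw = "42"

# Re-wrap with the tags xmlai supplied and stopped on, then parse normally.
root = ET.fromstring(f"<answer>{raw}</answer>")
print(root.text)  # 42
```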
Also, the regex for dealing with XML streams is surprisingly grotesque. I figured I'd limit the monstrosity to one codebase where it can be tested and maintained.
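For a sense of why: chunk boundaries can land anywhere, including in the middle of a closing tag, so a streaming parser has to hold back any suffix of its buffer that might turn out to be the start of that tag. Here is a toy sketch of the idea in plain Python (illustrative only, not xmlai's actual implementation):

```python
def stream_tag_text(chunks, tag="answer"):
    """Toy incremental extractor: yield the text inside <tag>...</tag>
    as it streams in, even when tags are split across chunk boundaries."""
    open_tag, close_tag = f"<{tag}>", f"</{tag}>"
    buf, started = "", False
    for chunk in chunks:
        buf += chunk
        if not started:
            i = buf.find(open_tag)
            if i == -1:
                continue  # opening tag not complete yet; keep buffering
            buf, started = buf[i + len(open_tag):], True
        j = buf.find(close_tag)
        if j != -1:
            yield buf[:j]  # closing tag seen: emit the rest and stop
            return
        # Hold back a suffix that could be the start of the closing tag.
        safe = len(buf) - (len(close_tag) - 1)
        if safe > 0:
            yield buf[:safe]
            buf = buf[safe:]

# Chunks split mid-tag on purpose; prints "42".
print("".join(stream_tag_text(["<ans", "wer>4", "2</an", "swer>"])))
```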