Talk to a local llama.cpp server through its chat completion API. Replies are plain text, and conversation history is kept per user.
npm install @cherry_sigma/koishi-plugin-llama-cpp

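As a rough sketch of how such a plugin works: llama.cpp's built-in server (`llama-server`) exposes an OpenAI-compatible `/v1/chat/completions` endpoint, listening on `http://127.0.0.1:8080` by default. The snippet below forwards each incoming message there and keeps a per-user history in memory. The `endpoint` config field, the middleware approach, and the history handling are illustrative assumptions, not necessarily this plugin's actual options:

```ts
import { Context, Schema } from 'koishi'

export const name = 'llama-cpp'

export interface Config {
  // Base URL of the llama.cpp server (assumed default port).
  endpoint: string
}

export const Config: Schema<Config> = Schema.object({
  endpoint: Schema.string().default('http://127.0.0.1:8080'),
})

// Per-user conversation history, kept in memory only.
const histories = new Map<string, { role: 'user' | 'assistant'; content: string }[]>()

export function apply(ctx: Context, config: Config) {
  // For simplicity this replies to every message; a real plugin
  // would likely gate on a command or an at-mention instead.
  ctx.middleware(async (session, next) => {
    const text = session.content?.trim()
    if (!text) return next()

    const history = histories.get(session.userId) ?? []
    history.push({ role: 'user', content: text })

    // llama.cpp serves an OpenAI-compatible chat completion API;
    // the server uses its loaded model, so no model name is required.
    const res = await fetch(`${config.endpoint}/v1/chat/completions`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ messages: history }),
    })
    const data = (await res.json()) as {
      choices: { message: { content: string } }[]
    }
    const reply = data.choices[0].message.content

    history.push({ role: 'assistant', content: reply })
    // Trim old turns so the prompt context does not grow without bound.
    histories.set(session.userId, history.slice(-20))

    // Returning a string from Koishi middleware sends it as the reply.
    return reply
  })
}
```

Point `endpoint` at wherever the server is running, e.g. one started locally with `llama-server -m model.gguf`.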