Talk to a local llama.cpp server via its chat completion API (plain-text replies, per-user conversation memory).
```
npm install koishi-plugin-llama-cpp
```

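For illustration, here is a minimal sketch of how a plugin like this can wire Koishi middleware to llama.cpp's OpenAI-compatible `/v1/chat/completions` endpoint. This is not the plugin's actual source: the `endpoint` config field, the default port, and the reply-to-every-message trigger are assumptions for the sketch.

```ts
// Sketch only — assumes a llama.cpp server started with something like
// `llama-server -m model.gguf --port 8080`, which exposes the
// OpenAI-compatible /v1/chat/completions endpoint.
// Requires Node 18+ for global fetch.
import { Context, Schema } from 'koishi'

export const name = 'llama-cpp-sketch' // hypothetical name for this sketch

export interface Config {
  endpoint: string // base URL of the llama.cpp server (assumed config shape)
}

export const Config: Schema<Config> = Schema.object({
  endpoint: Schema.string().default('http://127.0.0.1:8080'),
})

interface ChatMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
}

// Per-user memory: each user id keeps its own message history.
const histories = new Map<string, ChatMessage[]>()

export function apply(ctx: Context, config: Config) {
  ctx.middleware(async (session, next) => {
    // Real plugins usually gate on a prefix or @-mention; omitted here.
    if (!session.content) return next()

    const history = histories.get(session.userId) ?? []
    history.push({ role: 'user', content: session.content })

    // Keep memory bounded so the prompt doesn't grow without limit.
    if (history.length > 20) history.splice(0, history.length - 20)

    // llama.cpp's server speaks the OpenAI-compatible chat completion API.
    const res = await fetch(`${config.endpoint}/v1/chat/completions`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ messages: history }),
    })
    const data = await res.json()
    const reply: string = data.choices[0].message.content

    history.push({ role: 'assistant', content: reply })
    histories.set(session.userId, history)

    // Plain text: returning a string from Koishi middleware sends it as-is.
    return reply
  })
}
```

A module-level `Map` keyed by `session.userId` is the simplest way to get per-user memory; a production plugin would more likely persist history through Koishi's database service so it survives restarts.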