Showing 21-40 of 2,082 packages
Prebuilt binary for node-llama-cpp for Windows arm64
Prebuilt binary for node-llama-cpp for macOS x64
Prebuilt binary for node-llama-cpp for Linux x64 with Vulkan support
A native Capacitor plugin that embeds llama.cpp directly into mobile apps, enabling offline AI inference with chat-first API design. Complete iOS and Android support: text generation, chat, multimodal, TTS, LoRA, embeddings, and more.
Prebuilt binary for node-llama-cpp for Linux armv7l
Prebuilt binary for node-llama-cpp for Windows x64 with Vulkan support
Prebuilt binary for node-llama-cpp for Linux x64
Scaffold a new node-llama-cpp project from a template
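Project scaffolding like this is typically driven through `npm create`; the invocation below is a sketch based on the node-llama-cpp docs, and the interactive prompts (template choice, project name) may differ by version:

```shell
# Scaffold a new node-llama-cpp project from a template.
# Runs an interactive wizard that asks for a project name and template.
npm create node-llama-cpp@latest
```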
Prebuilt binary for node-llama-cpp for Windows x64 with CUDA support
Prebuilt binary for node-llama-cpp for macOS arm64 with Metal support
This repo is for one of the backends: [llama.cpp](https://github.com/ggerganov/llama.cpp)
A minimal llama.cpp provider for the Vercel AI SDK implementing LanguageModelV3 and EmbeddingModelV3
BeeBee TINY agent LLM service using node-llama-cpp
Prebuilt binary for node-llama-cpp for Linux x64 with CUDA support
An OpenAI and Claude API compatible server using node-llama-cpp for local LLM models
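An OpenAI-compatible server such as this one can usually be queried with the standard chat completions endpoint; the port and model name below are placeholder assumptions, not values taken from the package:

```shell
# Query a local OpenAI-compatible server for a chat completion.
# Port 3000 and "local-model" are placeholders; check the server's own docs.
curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "local-model",
        "messages": [{"role": "user", "content": "Say hello."}]
      }'
```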
Talk to a local llama.cpp server via the chat completion API (plain text, per-user memory).
A minimal llama.cpp provider for the Vercel AI SDK implementing LanguageModelV3 and EmbeddingModelV3
Talk to a local llama.cpp server via the chat completion API (plain text, per-user memory).
Llama.cpp provider for the Vercel AI SDK
C/C++ dictionary for cspell.