Showing 1-20 of 169 packages
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level
React Native binding of llama.cpp
This repo is for one of the backends: [llama.cpp](https://github.com/ggerganov/llama.cpp)
React Native binding of llama.cpp
React Native binding of llama.cpp
Another Node binding of llama.cpp
WebAssembly binding for llama.cpp - Enabling on-browser LLM inference
llama.cpp GGUF file parser for JavaScript
CLI tool to manage local llama.cpp servers on macOS
A minimal llama.cpp provider for the Vercel AI SDK implementing LanguageModelV3 and EmbeddingModelV3
Node.js client for liblloyal+llama.cpp
A native Capacitor plugin that embeds llama.cpp directly into mobile apps, enabling offline AI inference with chat-first API design. Complete iOS and Android support: text generation, chat, multimodal, TTS, LoRA, embeddings, and more.
Native module for Another Node binding of llama.cpp (linux-x64)
A minimal llama.cpp provider for the Vercel AI SDK implementing LanguageModelV3 and EmbeddingModelV3
Mobile web app that captures photos and extracts text using a local llama.cpp LLM server
llama.cpp LLM Provider
WebAssembly binding for llama.cpp - Enabling on-browser LLM inference (custom version from aviallon)
Native module for Another Node binding of llama.cpp (linux-x64-cuda)
OpenCode plugin for enhanced llama.cpp support with auto-detection and dynamic model discovery