Showing 21-40 of 92 packages
Standalone Node.js module for running LLMs locally - no external dependencies
A minimal llama.cpp provider for the Vercel AI SDK implementing LanguageModelV3 and EmbeddingModelV3
Agent memory CLI using markdown + local embeddings + SQLite
Electron-specific library for managing local AI model servers and resources
Labs SDK - clean tool functions for file, exec, search, git, and memory operations
A native Capacitor plugin that embeds llama.cpp directly into mobile apps, enabling offline AI inference with a chat-first API design. Complete iOS and Android support: text generation, chat, multimodal, TTS, LoRA, embeddings, and more.
Another Node.js binding of llama.cpp
Your offline (local) AI agent client for the Programmable Prompt Engine
Core SDK for RunAnywhere React Native - includes RACommons bindings, native bridges, and public API
HTML to Markdown converter
MCP server bridging Claude Code to local llama.cpp
Cortex is an OpenAI-compatible local AI server that developers can use to build LLM apps. It ships with a Docker-inspired command-line interface and a TypeScript client library, and can run as a standalone server or be imported as a library.
React Native binding of llama.cpp
Auxot GPU worker CLI - connects local GPU resources to the Auxot platform
CLI for LLM inference, benchmarking, and model management - run local LLMs with Metal/CUDA acceleration
A tool for managing and running multiple LLM instances
BeeBee TINY agent LLM service using node-llama-cpp