Showing 1-20 of 102,731 packages
MCP server that uses a local LLM to respond to queries - Binary distribution
RelayPlane Local LLM Proxy - Route requests through multiple providers
Local, LLM-agnostic code intelligence CLI
Ollama local LLM provider for ContextAI SDK
Local LLM inference for Node.js. GPU-accelerated. Zero config. Works standalone or with Vercel AI SDK.
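Entries like this one typically plug into the Vercel AI SDK as a model provider. As a rough sketch of that integration pattern (not this particular package's own API; the port, endpoint, and model name below are all assumptions), any local OpenAI-compatible server can be wired in via `createOpenAI` from `@ai-sdk/openai`:

```ts
import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";

// Point the AI SDK at a local OpenAI-compatible endpoint instead of the cloud.
// The baseURL, apiKey handling, and model name are assumptions for this sketch.
const local = createOpenAI({
  baseURL: "http://localhost:8080/v1",
  apiKey: "unused", // most local servers ignore the key entirely
});

const { text } = await generateText({
  model: local("my-local-model"), // whatever model tag the server exposes
  prompt: "Summarize what a local LLM is in one sentence.",
});
console.log(text);
```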
Blax - HMS-Powered Multi-Agent Platform with Government Agency Analysis, Deep Research, and Enterprise-Ready Deployment. No local LLM keys required.
Modern local LLM chat interface with Apple-inspired UI - React components for building AI chat applications
Minimalist local LLM chat interface using Ollama
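Several entries on this page wrap Ollama's local HTTP API. A minimal sketch of the call a chat interface like this makes underneath, assuming an Ollama daemon on its default port 11434 and an already-pulled model (the `llama3.2` tag is an assumption):

```ts
// One non-streaming round trip against Ollama's /api/chat endpoint.
async function chatOnce(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2", // any model already pulled with `ollama pull`
      messages: [{ role: "user", content: prompt }],
      stream: false, // one JSON reply instead of NDJSON chunks
    }),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const data = await res.json();
  return data.message.content; // the assistant's reply text
}

chatOnce("Why is the sky blue?").then(console.log);
```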
LM Studio local LLM adapter for @llmrtc/LLMRTC
Local LLM hook for Sails
Local LLM provider for TetherAI (streaming-first, with middleware support).
LLM eval & testing toolkit
CLI tool for monitoring local LLM resource usage
A command-line tool for generating text completions with local LLMs via GPT4All
Call Apple's on-device Foundation Models — no servers, no setup.
An OpenAI- and Claude-API-compatible server using node-llama-cpp for local LLMs
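OpenAI-API compatibility is the useful property here: the official `openai` npm client works against such a server unchanged, with only the base URL overridden. A sketch, assuming the server listens on localhost port 8000 and exposes one model (both are assumptions):

```ts
import OpenAI from "openai";

// The same client code that talks to api.openai.com, redirected locally.
const client = new OpenAI({
  baseURL: "http://localhost:8000/v1", // assumed local server address
  apiKey: "not-needed-locally", // compatible servers usually accept any value
});

const completion = await client.chat.completions.create({
  model: "local-model", // whichever model id the server advertises
  messages: [{ role: "user", content: "Hello from a local LLM!" }],
});
console.log(completion.choices[0].message.content);
```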
Intelligent thread pooling and scaling for local LLM requests based on CPU and GPU usage
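The core idea behind a package like this fits in a few lines: cap the number of in-flight local-LLM requests near the machine's parallelism instead of letting callers oversubscribe it. A simplified sketch (all names are illustrative; a real implementation would also watch GPU utilization, which Node does not expose natively):

```ts
import os from "node:os";

// Leave one core free for the event loop; the rest may serve requests.
const MAX_CONCURRENT = Math.max(1, os.cpus().length - 1);
let inFlight = 0;
const waiters: Array<() => void> = [];

async function acquire(): Promise<void> {
  if (inFlight < MAX_CONCURRENT) {
    inFlight++;
    return;
  }
  // Over capacity: queue up. release() hands the slot over directly,
  // so inFlight already counts us when this promise resolves.
  await new Promise<void>((resolve) => waiters.push(resolve));
}

function release(): void {
  const next = waiters.shift();
  if (next) {
    next(); // transfer the slot to the next waiter without freeing it
  } else {
    inFlight--;
  }
}

// Run `task` once a concurrency slot is available.
export async function withSlot<T>(task: () => Promise<T>): Promise<T> {
  await acquire();
  try {
    return await task();
  } finally {
    release();
  }
}
```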
Test your LLM-powered apps with a TypeScript-native, Vitest-based eval runner. No API key required.
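Vitest-based eval runners like this build on ordinary Vitest tests. A sketch of the pattern, where `generateAnswer` is a hypothetical stand-in for your app's model call (here it hits a local Ollama endpoint; the port and model tag are assumptions):

```ts
import { describe, it, expect } from "vitest";

// Hypothetical app function under test: one non-streaming Ollama completion.
async function generateAnswer(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama3.2", prompt, stream: false }),
  });
  return (await res.json()).response;
}

describe("capital-city eval", () => {
  it("mentions Paris when asked about France", async () => {
    const answer = await generateAnswer("What is the capital of France?");
    // Eval-style assertion: check a property of the output, not an exact string.
    expect(answer.toLowerCase()).toContain("paris");
  });
});
```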
Privacy-focused Gmail classifier using a local LLM - automatically organize your inbox with AI
TypeScript bindings for LangChain