npm explorer

Results for "Local LLM"

Showing 1-20 of 102,731 packages

mcp-local-llm

v1.0.1

MCP server that uses a local LLM to respond to queries - Binary distribution

Keywords: mcp, llm, local, ai, binary
Published 2 weeks ago · 0 downloads/week

@relayplane/proxy

v1.1.0

RelayPlane Local LLM Proxy - Route requests through multiple providers

Keywords: proxy, llm, openai, anthropic

moth-ai

v1.0.4

Local, LLM-agnostic code intelligence CLI

Keywords: cli, llm, ai, code-assistant

@contextaisdk/provider-ollama

v0.1.0

Ollama local LLM provider for ContextAI SDK

Keywords: contextai, ollama, llm, ai

@tryhamster/gerbil

v1.0.0-rc.0

Local LLM inference for Node.js. GPU-accelerated. Zero config. Works standalone or with Vercel AI SDK.

Keywords: llm, local, gpu, webgpu
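
The gerbil description mentions working standalone or with the Vercel AI SDK. Its own provider API is not shown on this page, so as a hedged sketch of the general pattern, the AI SDK's generateText call can be pointed at any local OpenAI-compatible endpoint; the base URL, placeholder API key, and model name below are assumptions rather than gerbil's documented interface.

```ts
import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";

// Generic Vercel AI SDK usage against a local OpenAI-compatible server.
// Base URL and model name are assumptions; gerbil's own provider API may differ.
const local = createOpenAI({
  baseURL: "http://localhost:1234/v1", // e.g. an LM Studio-style local server
  apiKey: "not-needed-locally",
});

const { text } = await generateText({
  model: local("llama-3.1-8b-instruct"),
  prompt: "Summarize the benefits of running an LLM locally.",
});
console.log(text);
```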

blax

v3.0.5

Blax - HMS-Powered Multi-Agent Platform with Government Agency Analysis, Deep Research, and Enterprise-Ready Deployment. No local LLM keys required.

Keywords: tlnt, hms, multi-agent, collaboration

@lebiraja/plugintool

v1.0.0

Modern local LLM chat interface with Apple-inspired UI - React components for building AI chat applications

Keywords: llm, chat, react, ai

yak-llm

v1.2.0

Minimalist local LLM chat interface using Ollama

Keywords: ollama, llm, chat, local
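
yak-llm, like several other Ollama-tagged packages in these results, ultimately talks to a local Ollama server over HTTP. A minimal sketch of that underlying call, independent of any particular package, assuming Ollama is running on its default port 11434 and a llama3 model has been pulled:

```ts
// Plain call to a local Ollama server's chat endpoint (default port 11434).
// The model name "llama3" is an assumption; substitute any model you have pulled.
async function chatWithLocalLLM(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",
      messages: [{ role: "user", content: prompt }],
      stream: false, // return a single JSON object instead of a stream
    }),
  });
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
  const data = await res.json();
  return data.message.content;
}

chatWithLocalLLM("Explain what a local LLM is in one sentence.")
  .then(console.log)
  .catch(console.error);
```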

@llmrtc/llmrtc-provider-lmstudio

v1.0.0

LMStudio local LLM adapter for @llmrtc/LLMRTC

Published 1 month ago · 0 downloads/week

@martin-pi/sails-hook-llm

v1.0.2

Local LLM hook for Sails

Keywords: sails, sailsjs, llm, hook

@tetherai/local

v0.4.1

Local LLM provider for TetherAI (streaming-first + middleware).

Keywords: ai, local, llm, ollama

promptfoo

v0.120.23

LLM eval & testing toolkit

Published 2 days ago · 0 downloads/week

envirollm

v1.2.0

CLI tool for monitoring local LLM resource usage

Keywords: llm, monitoring, energy, optimization

llm-complete

v1.0.2

A command-line tool for generating text completions using local LLM models with GPT4All

Keywords: llm, completion, text, gpt4all

apple-local-llm

v1.0.0

Call Apple's on-device Foundation Models — no servers, no setup.

Keywords: apple, llm, foundation-models, local

ai-server

v2.0.1

An OpenAI- and Claude-API-compatible server using node-llama-cpp for local LLM models

Keywords: ai, ai server, llm, openai
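
Because ai-server advertises an OpenAI-compatible API on top of node-llama-cpp, existing OpenAI-style clients can simply point at it. A sketch of such a request follows; the port, path prefix, and model name are assumptions and depend on how the server is actually configured.

```ts
// Chat completion against a local OpenAI-compatible endpoint.
// The base URL and model name are assumptions; check the server's own configuration.
const baseUrl = "http://localhost:3000/v1";

const res = await fetch(`${baseUrl}/chat/completions`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "local-model",
    messages: [{ role: "user", content: "Hello from a local LLM!" }],
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content);
```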

llm-threader

v1.1.0

Intelligent thread pooling and scaling for local LLM requests based on CPU and GPU usage

Keywords: llm, threading, scaling, cpu
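
llm-threader's description, pooling and scaling local LLM requests based on CPU and GPU usage, describes a pattern that can be sketched generically: derive a concurrency limit from current machine load and queue anything beyond it. The snippet below is a simplified illustration of that idea in plain Node.js, not this package's API.

```ts
import os from "node:os";

// Generic illustration of load-aware request pooling (not llm-threader's API):
// allow more in-flight LLM calls when the 1-minute load average is low.
function currentLimit(maxConcurrency = os.cpus().length): number {
  const normalizedLoad = os.loadavg()[0] / os.cpus().length; // roughly 0..1+
  return Math.max(1, Math.round(maxConcurrency * (1 - Math.min(normalizedLoad, 0.9))));
}

const queue: Array<() => Promise<void>> = [];
let inFlight = 0;

// Start queued jobs while we are under the current concurrency limit.
function drain(): void {
  while (inFlight < currentLimit() && queue.length > 0) {
    const job = queue.shift()!;
    inFlight++;
    job().finally(() => {
      inFlight--;
      drain();
    });
  }
}

// Submit an LLM request (or any async task) to the pool.
function submit<T>(task: () => Promise<T>): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    queue.push(async () => {
      try {
        resolve(await task());
      } catch (err) {
        reject(err);
      }
    });
    drain();
  });
}
```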

evalite

v0.19.0

Test your LLM-powered apps with a TypeScript-native, Vitest-based eval runner. No API key required.

Keywords: ai, evals, typescript, vitest
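
evalite's description (a TypeScript-native, Vitest-based eval runner) suggests test-file style usage. The following sketch follows the data/task/scorers shape commonly shown for it; treat the exact import and option names as assumptions and check the package's own docs.

```ts
import { evalite } from "evalite";
import { Levenshtein } from "autoevals";

// Sketch of a *.eval.ts file; option names are assumed, not verified against the package.
evalite("Capital cities", {
  // Test cases: input plus the expected output.
  data: async () => [{ input: "Capital of France?", expected: "Paris" }],
  // The task under test; in practice this would call your local LLM.
  task: async (input) => (input.includes("France") ? "Paris" : "Unknown"),
  // Scorers grade the output against the expectation.
  scorers: [Levenshtein],
});
```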

@aayvyas/gmail-classifier

v0.0.3

Privacy-focused Gmail classifier using local LLM - automatically organize your inbox with AI

Keywords: gmail, email, classifier, ai

langchain

v1.2.18

TypeScript bindings for LangChain

Keywords: llm, ai, gpt3, chain
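
The langchain entry fits the local-LLM theme of this search through its Ollama integration, which lives in the separate @langchain/ollama package. A minimal sketch, assuming an Ollama server on its default port and a pulled llama3 model:

```ts
import { ChatOllama } from "@langchain/ollama";

// LangChain.js chat model backed by a local Ollama server (default http://localhost:11434).
// The model name is an assumption; use whatever model you have pulled locally.
const model = new ChatOllama({ model: "llama3", temperature: 0 });

const response = await model.invoke("What is a local LLM?");
console.log(response.content);
```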