Showing 41-60 of 169 packages
Local LLM provider for llama.cpp
React Native binding of llama.cpp
React Native library for on-device LLM inference using llama.cpp. Part of Novastera CRM/ERP platform ecosystem.
A llama.cpp drop-in replacement for OpenAI's GPT endpoints, allowing GPT-powered apps to run on local llama.cpp models instead of OpenAI.
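A drop-in replacement like this works because the client keeps sending the same OpenAI-shaped request, only to a local base URL. A minimal sketch, assuming a local server listening on `http://localhost:8000/v1` (the URL and model alias are illustrative assumptions, not taken from the package):

```javascript
// Sketch: sending an OpenAI-style chat completion request to a local
// llama.cpp-backed server instead of api.openai.com.
const BASE_URL = "http://localhost:8000/v1"; // assumed local endpoint

// Build the same JSON body an OpenAI client would send.
function buildChatRequest(model, userMessage) {
  return {
    model, // a local model alias, not an OpenAI model name
    messages: [{ role: "user", content: userMessage }],
  };
}

// Usage (requires the local server to actually be running):
// fetch(`${BASE_URL}/chat/completions`, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildChatRequest("local-model", "Hello")),
// }).then((r) => r.json()).then(console.log);
```

Because the request shape is unchanged, existing OpenAI SDK clients typically only need their base URL pointed at the local server.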
Prebuilt binary for node-llama-cpp for Linux x64
Prebuilt binary for node-llama-cpp for Linux arm64
Prebuilt binary for node-llama-cpp for Linux armv7l
Low-level Node.js bindings for llama.cpp. Core library for running LLM models locally with native performance and hardware acceleration support.
Capacitor binding of llama.cpp
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level
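Generation-level schema enforcement means the sampler is constrained so the model can only emit tokens that keep the output valid against the schema. A minimal sketch, assuming node-llama-cpp; the schema below is an illustrative example, and the model-loading calls are left commented out because they need a local GGUF model file:

```javascript
// Illustrative JSON schema to constrain the model's output shape.
const schema = {
  type: "object",
  properties: {
    sentiment: { enum: ["positive", "neutral", "negative"] },
    confidence: { type: "number" },
  },
};

// With node-llama-cpp installed and a model on disk (hedged sketch):
// import { getLlama, LlamaChatSession } from "node-llama-cpp";
// const llama = await getLlama();
// const grammar = await llama.createGrammarForJsonSchema(schema);
// const model = await llama.loadModel({ modelPath: "model.gguf" });
// const context = await model.createContext();
// const session = new LlamaChatSession({
//   contextSequence: context.getSequence(),
// });
// const raw = await session.prompt("Rate this review: ...", { grammar });
// // The grammar guarantees `raw` parses as schema-conforming JSON.
```

Constraining at the token level avoids the retry loops needed when schema validation is only applied after generation.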
Unified AI inference layer: local backends (Ollama/llama.cpp) plus BYOK cloud providers
Prebuilt binary for node-llama-cpp for Windows x64
Node.js bindings for LlamaCPP, a C++ library for running language models.
Prebuilt binary for node-llama-cpp for Windows arm64
Prebuilt binary for node-llama-cpp for Linux x64 with Vulkan support
Prebuilt binary for node-llama-cpp for Linux x64 with CUDA support
React Native binding of llama.cpp for Inferra
Prebuilt binary for node-llama-cpp for macOS arm64 with Metal support
Prebuilt binary for node-llama-cpp for Windows x64 with Vulkan support
Prebuilt binary for node-llama-cpp for Windows x64 with CUDA support