Showing 1-20 of 179,782 packages
Large language model utilities
The Universal Translation Layer for Large Language Model APIs
Node.js library for the LLaMA/RWKV large language models
A high-quality PDF to Markdown tool based on large language model visual recognition.
Lets a large language model play a role and pose as a friend in a group chat
Estimate USD costs for Large Language Model API calls with dynamic pricing fetching
This package provides an SDK for utilizing LangChain-based LLM (Large Language Model) functionality in applications developed on the Jeliq platform.
Official TypeScript SDK for airouter.io - Automatically route requests to the best Large Language Model (LLM).
Large language models and functionality for Saltcorn
Translation services powered by the large language model in chatluna
Core library providing robust AI engineering functionalities tailored for Large Language Model (LLM) applications, enabling developers to build, deploy, and optimize AI solutions with ease.
Yinglish translator powered by the large language model in chatluna
Let the Gemini CLI access any large language model provider
Smash or Pass AI powered by the large language model in chatluna
An MCP (Model Context Protocol) server for PortOne users. It provides the contents of official documentation, such as the PortOne developer center and help center, to an LLM (Large Language Model) so it can assist users with integration and questions based on accurate information.
A simple, unified NPM-based interface for interacting with multiple Large Language Model (LLM) APIs, including OpenAI, AI21 Studio, Anthropic, Cloudflare AI, Cohere, Fireworks AI, Google Gemini, Goose AI, Groq, Hugging Face, Mistral AI, Perplexity, Reka A
A Model Context Protocol (MCP) server that connects Squad — the AI-powered product-discovery and strategy platform — to any MCP-aware large-language-model (LLM) application. It exposes a rich toolkit for creating, querying and updating product-stra
A multiplexer for Large Language Model APIs built on the OpenAI SDK. It combines quotas from multiple models and automatically uses fallback models when the primary models are rate limited.