llama-node: Node.js Library for Large Language Models (LLaMA/RWKV)

Logo picture generated by Stable Diffusion.
---
- LLaMA Node
- Introduction
- Supported models
- Supported platforms
- Installation
- Manual compilation
- CUDA support
- Acknowledgments
- Models/Inferencing tools dependencies
- Some source code comes from
- Community
---
This project is at an early stage and is not production ready; it does not follow semantic versioning. The Node.js API may change in the future, so use it with caution.
This is a Node.js library for inferencing LLaMA, RWKV, and LLaMA-derived models. It is built on top of llm (originally llama-rs), llama.cpp, and rwkv.cpp, and uses napi-rs to pass messages between the Node.js thread and the LLaMA inference thread.
llama.cpp backend supported models (in GGML format):
- LLaMA 🦙
- Alpaca
- GPT4All
- Chinese LLaMA / Alpaca
- Vigogne (French)
- Vicuna
- Koala
- OpenBuddy 🐶 (Multilingual)
- Pygmalion 7B / Metharme 7B
llm (llama-rs) backend supported models (in GGML format):
- GPT-2
- GPT-J
- LLaMA: LLaMA, Alpaca, Vicuna, Koala, GPT4All v1, GPT4-X, Wizard
- GPT-NeoX: GPT-NeoX, StableLM, RedPajama, Dolly v2
- BLOOM: BLOOMZ
rwkv.cpp backend supported models (in GGML format):
- RWKV
Supported platforms:
- darwin-x64
- darwin-arm64
- linux-x64-gnu (glibc >= 2.31)
- linux-x64-musl
- win32-x64-msvc
Node.js version: >= 16
---
- Install the llama-node npm package:

```bash
npm install llama-node
```

- Install at least one of the inference backends:
  - llama.cpp:

    ```bash
    npm install @llama-node/llama-cpp
    ```

  - or llm:

    ```bash
    npm install @llama-node/core
    ```

  - or rwkv.cpp:

    ```bash
    npm install @llama-node/rwkv-cpp
    ```
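Once a backend is installed, a minimal completion run looks roughly like the sketch below. This assumes the llama.cpp backend; the `LLamaCpp` import path, the `LoadConfig` fields, and the completion parameters are taken from the project's published examples and may differ between versions, and the model path is a placeholder for a GGML model you have downloaded yourself — treat this as a sketch, not a definitive API reference.

```typescript
import { LLM } from "llama-node";
// Backend import path is an assumption and may vary between versions.
import { LLamaCpp, LoadConfig } from "llama-node/dist/llm/llama-cpp.js";
import path from "path";

// Placeholder: point this at a GGML-format model file on your machine.
const model = path.resolve(process.cwd(), "./ggml-vicuna-7b-q4_0.bin");

// Bind the llama.cpp backend to the generic LLM wrapper.
const llama = new LLM(LLamaCpp);

const config: LoadConfig = {
    modelPath: model,
    enableLogging: true,
    nCtx: 1024,
    seed: 0,
    f16Kv: false,
    logitsAll: false,
    vocabOnly: false,
    useMlock: false,
    embedding: false,
    useMmap: true,
    nGpuLayers: 0,
};

const run = async () => {
    await llama.load(config);
    // Tokens are streamed back through the callback as they are generated.
    await llama.createCompletion(
        {
            nThreads: 4,
            nTokPredict: 128,
            topK: 40,
            topP: 0.1,
            temp: 0.2,
            prompt: "### Human:\nWhat is a llama?\n### Assistant:\n",
        },
        (response) => {
            process.stdout.write(response.token);
        }
    );
};

run();
```

The streaming callback reflects the napi-rs message channel mentioned above: inference runs on a separate native thread, and each generated token is posted back to Node.js as it is produced.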
---
Please see the contribution guide for how to get started with manual compilation.
---
Please read the documentation on our site to get started with manual compilation for CUDA support.
---
This library is published under the MIT/Apache-2.0 license. However, we strongly recommend that you cite our work and the work of our dependencies if you reuse code from this library.
- LLaMA models: facebookresearch/llama
- RWKV models: BlinkDL/RWKV-LM
- llama.cpp: ggerganov/llama.cpp
- llm: rustformers/llm
- rwkv.cpp: saharNooby/rwkv.cpp
- llama-cpp bindings: sobelio/llm-chain
- rwkv logits sampling: KerfuffleV2/smolrsrwkv
---
Join our Discord community now! Click to join llama-node Discord