Showing 1-20 of 110 packages
Reusable evaluators for AI evaluation frameworks
Core runner, evaluators, and storage for ArtemisKit LLM evaluation toolkit
Shape Expressions triple expression evaluator API: defines how @shexjs/validator invokes regex evaluators.
Demo video: https://static-docs.nocobase.com/NocoBase0510.mp4
CLI for interacting with the Google Genkit AI framework
Genkit AI framework plugin for RAG evaluation.
Much like tests in traditional software, evals are an important part of bringing LLM applications to production. The goal of this package is to provide a starting point for writing evals for your LLM applications, from which you can write more custom evals specific to your application.
TypeScript definitions for multisort
CPMS core (pure JS): evaluators + hybrid scoring + explain traces + naive pattern matching.
This is a collection of rules, rule-evaluators, and tests for semantic validation of Versa receipts. Written in Rust, it uses [napi-rs](https://napi.rs/) to compile to native modules for use in NodeJS environments. It can also be used in Rust backends.
An extend query-operation actor
An OIDC authentication module for NestJS APIs
A group query-operation actor
Async versions of various highly composable transducers, reducers and iterators
Generic class to process and serialise universal Turing machines and evaluators
Exposes the TypeScript compiler (tsc) as a Node.js module
The `@elizaos/core` package provides a robust foundation for building AI agents with dynamic interaction capabilities. It enables agents to manage entities, memories, and context, and to interact with external systems, going beyond simple message responses.
Various evaluators for the smartshop platform.
[Agentic applications](https://langchain-ai.github.io/langgraph/concepts/agentic_concepts/) give an LLM freedom over control flow in order to solve problems. While this freedom can be extremely powerful, the black box nature of LLMs can make it difficult to understand how changes in one part of an agent will affect behavior elsewhere.