Whitecircle.ai utilities for aligning model behavior with stated policies.
Whitecircle.ai components for LLM-as-a-judge scoring and rubric-based evals.
Whitecircle.ai helpers for securing AI pipelines: policy checks and secrets hygiene.
Whitecircle.ai red teaming helpers for probing LLM weaknesses safely.
Whitecircle.ai tools for judge-style evaluations and rubric scoring.
Whitecircle.ai hooks for monitoring prompts, responses, and model health.
Whitecircle.ai observability primitives for tracing, logging, and audit trails.
Whitecircle.ai moderation helpers for content safety and policy enforcement.