There are 3 repositories under guardrails topic.
The AI framework that adds the engineering to prompt engineering (Python/TS/Ruby/Java/C#/Rust/Go compatible)
An open-source framework for detecting, redacting, masking, and anonymizing sensitive data (PII) across text, images, and structured data. Supports NLP, pattern matching, and customizable pipelines.
NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
Building blocks for rapid development of GenAI applications
Fastest LLM gateway (50x faster than LiteLLM) with adaptive load balancer, cluster mode, guardrails, 1000+ models support & <100 µs overhead at 5k RPS.
⚕️GenAI powered multi-agentic medical diagnostics and healthcare research assistance chatbot. 🏥 Designed for healthcare professionals, researchers and patients.
A curated list of blogs, videos, tutorials, code, tools, scripts, and anything useful to help you learn Azure Policy - by @JesseLoudon
PAIG (Pronounced similar to paige or payj) is an open-source project designed to protect Generative AI (GenAI) applications by ensuring security, safety, and observability.
Real-time guardrail that shows token spend & kills runaway LLM/agent loops.
Open-source MCP gateway and control plane for teams to govern which tools agents can use, what they can do, and how it’s audited—across agentic IDEs like Cursor, or other agents and AI tools.
ChatGPT API Usage using LangChain, LlamaIndex, Guardrails, AutoGPT and more
Framework for LLM evaluation, guardrails and security
OpenGuardrails: Developer-First Open-Source AI Security Platform - Comprehensive Security Protection for AI Applications
LangEvals aggregates various language model evaluators into a single platform, providing a standard interface for a multitude of scores and LLM guardrails, for you to protect and benchmark your LLM models and pipelines.
Make AI work for Everyone - Monitoring and governing for your AI/ML
LLM proxy to observe and debug what your AI agents are doing.
Xiangxin Guardrails is an open-source, context-aware AI guardrails platform that provides protection against prompt injection attacks, content safety risks, and data leakage. It can be deployed as a security gateway or integrated via API, offering enterprise-grade, fully private deployment options.
First-of-its-kind AI benchmark for evaluating the protection capabilities of large language model (LLM) guard systems (guardrails and safeguards)
Open-source toolkit for responsible AI: CLI + SDK to scan code, collect evidence, and generate model cards, risk files, evals, and RAG indexes.
A curated list of materials on AI guardrails
Trustworthy question-answering AI plugin for chatbots in the social sector with advanced content performance analysis.
Learn how to create an AI Agent with Django, LangGraph, and Permit.
Awesome AWS service control policies (SCPs), Resource Control Policies (RCPs), and other organizational policies
A TypeScript library providing a set of guards for LLM (Large Language Model) applications
A Python library for guardrail models evaluation.
Layered guardrails to make agentic AI safer and more reliable.
Agentic Github Issues Retrieval on Kubernetes
Middleware for the Vercel AI SDK that adds safety, quality control, and cost management to your AI applications by intercepting prompts and responses.
TurboFuzzLLM: Turbocharging Mutation-based Fuzzing for Effectively Jailbreaking Large Language Models in Practice
Client-side retrieval firewall for RAG systems — blocks prompt injection and secret leaks, re-ranks stale or untrusted content, and keeps all data inside your environment.
💂🏼 Build your Documentation AI with Nemo Guardrails
This is a RAG based chatbot in which semantic cache and guardrails have been incorporated.
This repo hosts the Python SDK and related examples for AIMon, which is a proprietary, state-of-the-art system for detecting LLM quality issues such as Hallucinations. It can be used during offline evals, continuous monitoring or inline detection. We offer various model quality metrics that are fast, reliable and cost-effective.
Securing Agentic AI Developer Day shows developers how to take an agentic AI reference workflow to production securely.
LLM-as-a-Judge security layer for Microsoft Copilot Studio agents
We compared LangChain, Fixie, and Marvin