si3mshady / llm-guard

The Security Toolkit for LLM Interactions

Home Page:https://laiyer.ai

Geek Repo:Geek Repo

Github PK Tool:Github PK Tool

LLM Guard - The Security Toolkit for LLM Interactions

LLM Guard by Laiyer.ai is a comprehensive tool designed to fortify the security of Large Language Models (LLMs).

Documentation | Demo | Changelog

MIT license Code style: black PyPI - Python Version Downloads Downloads Twitter

Production Support / Help for companies

We're eager to provide personalized assistance when deploying your LLM Guard to a production environment.

What is LLM Guard?

LLM-Guard

By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks, LLM-Guard ensures that your interactions with LLMs remain safe and secure.

Installation

Begin your journey with LLM Guard by downloading the package:

pip install llm-guard

Getting Started

Important Notes:

  • LLM Guard is designed for easy integration and deployment in production environments. While it's ready to use out-of-the-box, please be informed that we're constantly improving and updating the repository.
  • Base functionality requires a limited number of libraries. As you explore more advanced features, necessary libraries will be automatically installed.
  • Ensure you're using Python version 3.8.1 or higher. Confirm with: python --version.
  • Library installation issues? Consider upgrading pip: python -m pip install --upgrade pip.

Examples:

Supported scanners

Prompt scanners

Output scanners

Roadmap

You can find our roadmap here. Please don't hesitate to contribute or create issues, it helps us improve LLM Guard!

Contributing

Got ideas, feedback, or wish to contribute? We'd love to hear from you! Email us.

For detailed guidelines on contributions, kindly refer to our contribution guide.

About

The Security Toolkit for LLM Interactions

https://laiyer.ai

License:MIT License


Languages

Language:Python 99.0%Language:Makefile 0.7%Language:Dockerfile 0.3%