cryptoman0162754 / guardrail-ml

🛡️Build LLM applications safely and reliably🛡️

Home Page: https://useguardrail.com


🛡️Guardrail ML

License: Apache 2.0 · Python 3.7+ · Code style: black


Guardrail ML is a toolkit for developers to safely bring AI from prototype to production. Our SDK helps you build production-grade LLM applications quickly and reliably.

Benefits

  • 🚀 build production-grade LLM applications quickly and reliably
  • 📝 customize to your unique use case and automate workflows
  • 💸 improve performance, reduce cost, and deploy with confidence

Features

  • 🛠️ evaluate and track prompts and LLM outputs with automated text and NLP metrics
  • 🤖 benchmark domain-specific tasks with automated agent simulated conversations
  • 🛡️ safeguard LLMs with our customizable firewall and enforce policies with guardrails
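To make the firewall idea concrete, here is a minimal, self-contained sketch of the concept: scan an LLM output against simple policies before it reaches the user. This is not guardrail-ml's actual API; the `POLICIES` table and `apply_firewall` function are hypothetical names for illustration only.

```python
import re

# Hypothetical policy table: each entry pairs a pattern to block
# with a replacement. Real guardrail-ml policies are configurable
# and richer than these two regexes.
POLICIES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED-EMAIL]"),
]

def apply_firewall(text: str) -> str:
    """Redact policy violations from an LLM output before returning it."""
    for pattern, replacement in POLICIES:
        text = pattern.sub(replacement, text)
    return text

print(apply_firewall("Contact me at jane@example.com"))
# -> Contact me at [REDACTED-EMAIL]
```

The same pattern extends to prompt-side checks (e.g. rejecting jailbreak strings before they reach the model), which is the "firewall" half of the feature above.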

Quickstart

Open In Colab

Installation 💻

  1. Get API Key

  2. To install guardrail-ml, use the Python Package Index (PyPI) as follows:

pip install guardrail-ml

Usage 🛡️🔗

import sqlite3

import pandas as pd

from guardrail.client import run_metrics
from guardrail.client import run_simple_metrics
from guardrail.client import create_dataset

# Output/Prompt Metrics
run_metrics(output="Guardrail is an open-source toolkit for building domain-specific language models with confidence. From domain-specific dataset creation and custom evaluations to safeguarding and redteaming aligned with policies, our tools accelerate your LLM workflows to systematically derisk deployment.",
            prompt="What is guardrail-ml?",
            model_uri="llama-v2-guanaco")

# View Logs
con = sqlite3.connect("logs.db")
df = pd.read_sql_query("SELECT * from logs", con)
df.tail(20)

# Generate Dataset from PDF
create_dataset(model="OpenAssistant/falcon-7b-sft-mix-2000",
               tokenizer="OpenAssistant/falcon-7b-sft-mix-2000",
               file_path="example-docs/Medicare Appeals Paper FINAL.pdf",
               output_path="./output.json",
               load_in_4bit=True)
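Since the log store above is a plain SQLite file, it can also be queried directly with SQL instead of loading the whole table. A minimal self-contained sketch, using an in-memory stand-in for `logs.db` (the column names here are illustrative; the real schema may differ):

```python
import sqlite3

# Illustrative only: an in-memory stand-in for logs.db. The actual
# columns written by guardrail-ml may differ from this toy schema.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE logs (prompt TEXT, output TEXT, model_uri TEXT)")
con.executemany(
    "INSERT INTO logs VALUES (?, ?, ?)",
    [
        ("What is guardrail-ml?", "A toolkit for building LLM apps.", "llama-v2-guanaco"),
        ("Summarize the appeal.", "The appeal concerns Medicare.", "falcon-7b-sft"),
    ],
)

# Filter logged runs by model, the same way you would against logs.db.
rows = con.execute(
    "SELECT prompt, output FROM logs WHERE model_uri = ?",
    ("llama-v2-guanaco",),
).fetchall()
print(len(rows))  # 1
```

The same `WHERE` filter works through `pd.read_sql_query` if you prefer to stay in pandas, as in the `df.tail(20)` snippet above.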

More Colab Notebooks

4-bit QLoRA of llama-v2-7b with dolly-15k (07/21/23): Open In Colab

Fine-Tuning Dolly 2.0 with LoRA: Open In Colab

Inferencing Dolly 2.0: Open In Colab

Related AI Papers & Resources:

About

License: Apache License 2.0

