Guardrails AI (guardrails-ai)

Location: United States of America

Home Page: guardrailsai.com

Guardrails AI's repositories

guardrails

Adding guardrails to large language models.

Language: Python · License: Apache-2.0 · Stargazers: 3479 · Issues: 25 · Issues: 235
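The pattern the library implements can be sketched, very loosely, as wrapping an LLM call with a set of output validators. The function and validator names below are hypothetical illustrations of that pattern, not the library's actual API:

```python
# Hypothetical sketch of the guardrails pattern, NOT the real guardrails API:
# wrap an LLM call and run each validator over the output, collecting failures.
def run_with_guardrails(llm_fn, prompt, validators):
    """Call llm_fn(prompt), then apply each (name, check) validator pair."""
    output = llm_fn(prompt)
    failures = [name for name, check in validators if not check(output)]
    return output, failures

# Usage with a stubbed "LLM" and one simple validator:
output, failures = run_with_guardrails(
    lambda p: "HELLO WORLD",       # stand-in for a real model call
    "Say hello, loudly",
    [("uppercase", str.isupper)],
)
```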

detect_pii

Guardrails AI: PII Filter - Validates that text does not contain PII

Language: Python · License: Apache-2.0 · Stargazers: 2 · Issues: 0 · Issues: 0
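As a rough illustration of what this validator checks (the real filter relies on dedicated PII-detection models, not these two hypothetical regexes):

```python
import re

# Illustrative only: match two common PII patterns, emails and US-style phone numbers.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def contains_pii(text: str) -> bool:
    """Return True if the text appears to contain an email address or phone number."""
    return bool(EMAIL_RE.search(text) or PHONE_RE.search(text))
```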

competitor_check

Guardrails AI: Competitor Check - Validates that LLM-generated text does not name any competitor from a given list

Language: Python · License: Apache-2.0 · Stargazers: 1 · Issues: 0 · Issues: 0
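The check can be approximated with a naive case-insensitive substring match; a production validator would also need word-boundary and entity handling. The competitor names below are made up:

```python
def names_competitor(text: str, competitors: list[str]) -> bool:
    """Naive check: does any listed competitor name appear in the text?"""
    lowered = text.lower()
    return any(name.lower() in lowered for name in competitors)
```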

guardrails-api-client

OpenAPI Specifications and scripts for generating SDKs for the various Guardrails services

Language: Python · Stargazers: 1 · Issues: 5 · Issues: 0

mentions_drugs

Validates that the generated text does not contain a drug name

Language: Python · License: Apache-2.0 · Stargazers: 1 · Issues: 0 · Issues: 0

qa_relevance_llm_eval

Guardrails AI: QA Relevance LLM eval - Validates that an answer is relevant to the question asked by prompting the LLM to self-evaluate

Language: Python · License: Apache-2.0 · Stargazers: 1 · Issues: 3 · Issues: 0

reading_time

Guardrails AI: Reading time validator - Validates that a string can be read in less than a certain amount of time.

Language: Python · License: Apache-2.0 · Stargazers: 1 · Issues: 0 · Issues: 0
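The underlying arithmetic is simple: word count divided by an assumed reading speed. A sketch, assuming a hypothetical default of 200 words per minute:

```python
def within_reading_time(text: str, max_minutes: float, wpm: int = 200) -> bool:
    """Return True if the text can be read in at most max_minutes at wpm words/minute."""
    minutes = len(text.split()) / wpm
    return minutes <= max_minutes
```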

saliency_check

Guardrails AI: Saliency check - Checks that the summary covers the list of topics present in the document

Language: Python · License: Apache-2.0 · Stargazers: 1 · Issues: 2 · Issues: 1

similar_to_document

Guardrails AI: Similar to Document - Validates that a value is similar to the document

Language: Python · License: Apache-2.0 · Stargazers: 1 · Issues: 0 · Issues: 0

toxic_language

Guardrails AI: Toxic language - Validates that the generated text does not contain toxic language

Language: Python · License: Apache-2.0 · Stargazers: 1 · Issues: 0 · Issues: 0

two_words

Guardrails AI: Two words validator - Validates that a value is two words

Language: Python · License: Apache-2.0 · Stargazers: 1 · Issues: 0 · Issues: 0
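The rule is literal enough to state in one line; a sketch of the equivalent check:

```python
def is_two_words(value: str) -> bool:
    """Return True if the value consists of exactly two whitespace-separated words."""
    return len(value.split()) == 2
```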

uppercase

Guardrails AI: Upper case - Validates that a value is upper case

Language: Python · License: Apache-2.0 · Stargazers: 1 · Issues: 0 · Issues: 0
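An equivalent check in plain Python (note that `str.isupper()` also requires at least one cased character):

```python
def is_uppercase(value: str) -> bool:
    """Return True if every cased character in the value is upper case."""
    return value.isupper()
```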

valid_address

A Guardrails AI validator that validates whether a given address is valid

Language: Python · License: Apache-2.0 · Stargazers: 1 · Issues: 0 · Issues: 0

valid_json

Guardrails AI: Valid JSON - Validates that a value is parseable as valid JSON.

Language: Python · License: Apache-2.0 · Stargazers: 1 · Issues: 0 · Issues: 0
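The check amounts to attempting a parse; a minimal sketch using the standard library:

```python
import json

def is_valid_json(value: str) -> bool:
    """Return True if the value parses as JSON."""
    try:
        json.loads(value)
    except (ValueError, TypeError):
        return False
    return True
```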

valid_range

Guardrails AI: Valid range - Validates that a value is within a range

Language: Python · License: Apache-2.0 · Stargazers: 1 · Issues: 0 · Issues: 0
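A sketch of the equivalent bounds check, with either bound optional (parameter names are illustrative):

```python
def in_range(value, min_val=None, max_val=None) -> bool:
    """Return True if value lies within the (optional) min/max bounds, inclusive."""
    if min_val is not None and value < min_val:
        return False
    if max_val is not None and value > max_val:
        return False
    return True
```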

valid_url

Guardrails AI: Valid URL - Validates that a value is a valid URL

Language: Python · License: Apache-2.0 · Stargazers: 1 · Issues: 0 · Issues: 0
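A minimal sketch of such a check using the standard library (a real validator would likely be stricter):

```python
from urllib.parse import urlparse

def is_valid_url(value: str) -> bool:
    """Return True if the value looks like an http(s) URL with a host."""
    parsed = urlparse(value)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)
```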

high_quality_translation_validator

Fork of BrainLogic AI's validator

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

interfaces

Shared interfaces defined in JSON Schema.

Language: JavaScript · Stargazers: 0 · Issues: 0 · Issues: 0

llm_critic

A Guardrails AI validator that validates LLM responses by grading and evaluating them against a given set of criteria/metrics

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

provenance_embeddings

Guardrails AI: Provenance Embeddings - Validates that LLM-generated text matches some source text based on distance in embedding space

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0
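The idea is a nearest-neighbor check in embedding space: the output passes if its embedding is close enough to some source chunk's embedding. A dependency-free sketch with toy vectors (the threshold and vectors are illustrative; the real validator uses an actual embedding model):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def is_supported(answer_vec, source_vecs, threshold=0.8):
    """Pass if the answer embedding is close to at least one source embedding."""
    return any(cosine_similarity(answer_vec, s) >= threshold for s in source_vecs)
```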

provenance_llm

Guardrails AI: Provenance LLM - Validates that the LLM-generated text is supported by the provided contexts.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

quotes_price

Check if the generated text contains a price quote in the given currency

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0
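A rough sketch of such a check for a symbol-prefixed currency (the pattern is illustrative and only handles amounts like $5 or $19.99):

```python
import re

def quotes_price(text: str, symbol: str = "$") -> bool:
    """Return True if the text contains an amount prefixed by the currency symbol."""
    pattern = re.escape(symbol) + r"\s?\d+(?:\.\d{2})?"
    return bool(re.search(pattern, text))
```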

response_evaluator

A Guardrails AI validator that validates LLM responses by re-prompting the LLM to self-evaluate

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 3 · Issues: 0

responsiveness_check

A validator which ensures that a generated output answers the prompt given.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

restricttotopic

Validator for Guardrails Hub that checks whether a text is related to a given topic.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

unusual_prompt

A Guardrails AI input validator that detects attempts to jailbreak or trick an LLM using unusual prompting techniques

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 3 · Issues: 0

web_sanitization

Scans LLM outputs for code, code fragments, and keys

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0