Guardrails AI's repositories
guardrails
Adding guardrails to large language models.
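A minimal sketch of the core usage pattern, assuming the package and one Hub validator (here, the detect_pii validator listed below) are installed; parameter values are illustrative:

```python
# Sketch: wrap a check around text with a Guard.
# Assumes `pip install guardrails-ai` and
# `guardrails hub install hub://guardrails/detect_pii` have been run.
from guardrails import Guard
from guardrails.hub import DetectPII

guard = Guard().use(
    DetectPII(pii_entities=["EMAIL_ADDRESS", "PHONE_NUMBER"], on_fail="fix")
)

# on_fail="fix" redacts detected PII instead of raising an error.
outcome = guard.validate("Contact me at jane@example.com")
print(outcome.validated_output)
```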
detect_pii
Guardrails AI: PII Filter - Validates that text does not contain PII
competitor_check
Guardrails AI: Competitor Check - Validates that LLM-generated text is not naming any competitors from a given list
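A hedged sketch of how this validator is typically wired in, assuming the Hub class is `CompetitorCheck` and takes a list of competitor names (the names here are made up):

```python
from guardrails import Guard
from guardrails.hub import CompetitorCheck  # hub://guardrails/competitor_check

# The competitor list is illustrative.
guard = Guard().use(
    CompetitorCheck(competitors=["Acme Corp", "Globex"], on_fail="exception")
)

guard.validate("Our product beats everything else on the market.")  # passes
# guard.validate("Acme Corp has a similar feature.")  # would raise
```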
guardrails-api-client
OpenAPI Specifications and scripts for generating SDKs for the various Guardrails services
mentions_drugs
Validates that the generated text does not contain a drug name
qa_relevance_llm_eval
Guardrails AI: QA Relevance LLM eval - Validates that an answer is relevant to the question asked by asking the LLM to self-evaluate
reading_time
Guardrails AI: Reading time validator - Validates that a string can be read in less than a certain amount of time.
saliency_check
Guardrails AI: Saliency check - Checks that the summary covers the list of topics present in the document
similar_to_document
Guardrails AI: Similar to Document - Validates that a value is similar to the document
toxic_language
Guardrails AI: Toxic language - Validates that the generated text does not contain toxic language
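A sketch under the assumption that the Hub class is `ToxicLanguage` with a score threshold and per-sentence validation; the values shown are illustrative:

```python
from guardrails import Guard
from guardrails.hub import ToxicLanguage  # hub://guardrails/toxic_language

guard = Guard().use(
    ToxicLanguage(threshold=0.5, validation_method="sentence", on_fail="exception")
)

# Passes for benign text; raises if any sentence scores above the threshold.
outcome = guard.validate("Thanks for the thoughtful question!")
print(outcome.validation_passed)
```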
unusual_prompt
A Guardrails AI input validator that detects whether a user is attempting to jailbreak or trick an LLM with unusual prompting techniques
valid_address
A Guardrails AI validator that checks whether a given address is valid
valid_choice
Guardrails AI: Valid choices - Validates that a value is one of a set of acceptable choices
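Assuming the Hub class is `ValidChoices` and takes the allowed values as a list, a usage sketch might look like:

```python
from guardrails import Guard
from guardrails.hub import ValidChoices  # hub://guardrails/valid_choices

# Constrain an answer to a fixed vocabulary (the choices are illustrative).
guard = Guard().use(
    ValidChoices(choices=["small", "medium", "large"], on_fail="exception")
)

outcome = guard.validate("medium")
print(outcome.validation_passed)  # True
```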
valid_json
Guardrails AI: Valid JSON - Validates that a value is parseable as valid JSON.
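A sketch assuming the Hub class is named `ValidJson` and needs no required arguments (both assumptions):

```python
import json
from guardrails import Guard
from guardrails.hub import ValidJson  # class name assumed

guard = Guard().use(ValidJson(on_fail="exception"))

outcome = guard.validate('{"name": "widget", "price": 9.99}')
if outcome.validation_passed:
    data = json.loads(outcome.validated_output)
    print(data["price"])
```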
valid_range
Guardrails AI: Valid range - Validates that a value falls within a given range
high_quality_translation_validator
Fork of BrainLogic AI's validator
interfaces
Shared interfaces defined in JSON Schema.
llm_critic
A Guardrails AI validator that validates LLM responses by grading and evaluating them against a given set of criteria or metrics
provenance_embeddings
Guardrails AI: Provenance Embeddings - Validates that LLM-generated text matches some source text based on distance in embedding space
provenance_llm
Guardrails AI: Provenance LLM - Validates that the LLM-generated text is supported by the provided contexts.
quotes_price
Checks whether the generated text contains a price quote in the given currency
response_evaluator
A Guardrails AI validator that validates LLM responses by re-prompting the LLM to self-evaluate
responsiveness_check
A validator that ensures a generated output answers the given prompt.
restricttotopic
Validator for the Guardrails Hub that checks whether a text is related to a given topic.
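A sketch assuming the Hub class is `RestrictToTopic` with `valid_topics`/`invalid_topics` lists (the topic names and hub URI are illustrative):

```python
from guardrails import Guard
from guardrails.hub import RestrictToTopic  # hub URI for installation assumed

guard = Guard().use(
    RestrictToTopic(
        valid_topics=["sports"],
        invalid_topics=["politics"],
        on_fail="exception",
    )
)

outcome = guard.validate("The match went to penalties after extra time.")
print(outcome.validation_passed)
```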
web_sanitization
Scans LLM outputs for code, code fragments, and keys