
Agent Evaluation

Agent Evaluation is a generative AI-powered framework for testing virtual agents.

Internally, Agent Evaluation implements an LLM agent (the evaluator) that orchestrates conversations with your own agent (the target) and evaluates the target's responses over the course of the conversation.
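
For illustration, the evaluator and target are typically declared together in a test plan. Below is a minimal sketch of what such a plan can look like; the field names and values (the model identifier, the target type, the agent IDs, and the test itself) are illustrative assumptions, so consult the documentation for the exact schema your version supports.

```yaml
# Illustrative test plan sketch (agenteval.yml). Field names and values are
# assumptions for illustration; see the documentation for the exact schema.
evaluator:
  model: claude-3                      # LLM that drives and judges the conversation
target:
  type: bedrock-agent                  # built-in target type for an Amazon Bedrock agent
  bedrock_agent_id: <YOUR_AGENT_ID>
  bedrock_agent_alias_id: <YOUR_AGENT_ALIAS_ID>
tests:
  check_weather_lookup:
    steps:
      - Ask the agent for the weather forecast in Seattle.
    expected_results:
      - The agent responds with a weather forecast for Seattle.
```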

✨ Key features

  • Built-in support for popular AWS services including Amazon Bedrock, Amazon Q Business, and Amazon SageMaker. You can also bring your own agent to test using Agent Evaluation.
  • Orchestrate concurrent, multi-turn conversations with your agent while evaluating its responses.
  • Define hooks to perform additional tasks such as integration testing (see the sketch after this list).
  • Incorporate Agent Evaluation into CI/CD pipelines to expedite the time to delivery while maintaining the stability of agents in production environments.
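
The hook mentioned above is, conceptually, a small Python class whose methods run before and after a test. The sketch below is hypothetical: the import path, base class name, and method signatures are assumptions made for illustration, so check the documentation for the actual hook interface.

```python
# Hypothetical hook sketch. The module path, base class, and method names are
# assumptions drawn from the hook feature described above; consult the
# documentation for the actual interface.
from agenteval.hook import Hook


class OrderAgentHook(Hook):
    def pre_evaluate(self, test, trace):
        # Runs before the evaluator starts the conversation with the target,
        # e.g. seed a record the target agent is expected to look up.
        print(f"Preparing test data for: {test}")

    def post_evaluate(self, test, test_result, trace):
        # Runs after the conversation ends, e.g. verify that the target agent
        # produced the expected downstream side effect (integration testing).
        print(f"Checking downstream state for: {test}")
```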

📚 Documentation

To get started, please visit the full documentation at https://awslabs.github.io/agent-evaluation/. To contribute, please refer to CONTRIBUTING.md.
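
As a quick-start sketch, assuming the package is published on PyPI as agent-evaluation and exposes an agenteval CLI with init and run commands (verify both against the documentation):

```bash
# Illustrative quick start; the package name and CLI commands are assumptions,
# so confirm them in the documentation linked above.
pip install agent-evaluation   # install the framework
agenteval init                 # scaffold a test plan in the current directory
agenteval run                  # run the test plan against your target agent
```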

πŸ‘ Contributors

Shout out to these awesome contributors:


License

Apache License 2.0

