lavague-ai / LaVague

Large Action Model framework to develop AI Web Agents

Home Page: https://docs.lavague.ai/en/latest/

LaVague Logo

Welcome to LaVague

A Large Action Model framework for developing AI Web Agents

πŸ„β€β™€οΈ What is LaVague?

LaVague is an open-source Large Action Model framework to develop AI Web Agents.

Our web agents take an objective, such as "Print installation steps for Hugging Face's Diffusers library", and perform the actions required to achieve it by leveraging our two core components (see the sketch after this list):

  • A World Model that takes an objective and the current state (i.e. the current web page) and turns them into instructions
  • An Action Engine which "compiles" these instructions into action code, e.g. Selenium or Playwright, and executes them
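
To make the interaction between these two components concrete, here is a minimal, self-contained sketch of such a loop. The classes and method names below are hypothetical stand-ins written for illustration only, not LaVague's actual API (the real API is shown in the Hands-on section):

# A simplified, hypothetical sketch of a World Model / Action Engine loop.
# These stubs are illustrative only; see the Hands-on section for the real LaVague API.

class WorldModel:
    def get_instruction(self, objective: str, page_html: str) -> str:
        # In a real agent this would be an LLM call conditioned on the page state.
        return f"Click the link that matches: {objective}"

class ActionEngine:
    def compile(self, instruction: str) -> str:
        # In a real agent this would generate Selenium or Playwright code.
        return f"print({instruction!r})"

    def execute(self, action_code: str) -> None:
        # Generated code is executed directly (see the Security warning below).
        exec(action_code)

def run(objective: str, page_html: str, max_steps: int = 3) -> None:
    world_model, action_engine = WorldModel(), ActionEngine()
    for _ in range(max_steps):
        instruction = world_model.get_instruction(objective, page_html)
        action_engine.execute(action_engine.compile(instruction))

run("Go on the quicktour of PEFT", "<html>...</html>")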

🚀 Getting Started

Demo

Here is an example of how LaVague can take multiple steps to achieve the objective of "Go on the quicktour of PEFT":

Demo for agent

Hands-on

To do this, the steps are simple:

  1. Install LaVague with:
pip install lavague
  2. Use our framework to build a Web Agent and implement the objective:
from lavague.defaults import default_get_selenium_driver
from lavague.action_engine import ActionEngine
from lavague.world_model import GPTWorldModel
from lavague.agents import WebAgent

# Set up a Selenium driver and the Action Engine that turns instructions into action code
driver = default_get_selenium_driver()
action_engine = ActionEngine()

# Load a World Model that turns an objective and the current page state into instructions
world_model = GPTWorldModel.from_hub("hf_example")

# Build the agent, open the target page, and run the objective
agent = WebAgent(driver, action_engine, world_model)
agent.get("https://huggingface.co/docs")
agent.run("Go on the quicktour of PEFT")

For more information on this example and how to use LaVague, see our quick-tour.

Note that these examples use our default OpenAI API configuration, so you will need to set the OPENAI_API_KEY variable in your local environment to a valid API key for them to work.
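
For example, in a Python script or notebook you could set the key before running the agent (replace the placeholder with your own key, or equivalently export OPENAI_API_KEY in your shell):

import os

# Provide your OpenAI API key; the default configuration reads OPENAI_API_KEY from the environment.
os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder, use your own key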

For an end-to-end example of LaVague in a Google Colab, see our quick-tour notebook.

🙋 Contributing

We would love your help and support on our quest to build a robust and reliable Large Action Model for web automation.

To avoid multiple people working on the same thing and to make sure your work can be merged, we have outlined the following contribution process:

  1. 📒 We outline tasks on our backlog: we recommend you check out issues with the help-wanted and good first issue labels
  2. 🙋‍♀️ If you are interested in working on one of these tasks, comment on the issue!
  3. 🤝 We will discuss with you and assign you the task with a community assigned label
  4. 💬 We will then be available to discuss this task with you
  5. ⬆️ You should submit your work as a PR
  6. ✅ We will review and merge your code or request changes/give feedback

Please check out our contributing guide for more details.

If you want to ask questions, contribute, or have proposals, please join our Discord to chat!

πŸ—ΊοΈ Roadmap

To keep up to date with our progress, check out our project backlog here.

🚨 Security warning

Note that this project executes LLM-generated code using exec, which is not considered a safe practice. We therefore recommend taking extra care when using LaVague and running it in a sandboxed environment!
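
To illustrate why this warrants caution, exec runs whatever Python it is given with the full privileges of your process; the snippet below is a generic illustration (not LaVague's internals) of why generated code should be treated as untrusted:

# Generic illustration (not LaVague code): exec runs arbitrary code
# with full access to your process, filesystem, and network.
generated_code = "import os; print(os.listdir('.'))"  # imagine this string came from an LLM
exec(generated_code)  # nothing here prevents destructive or data-exfiltrating behavior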

📈 Data collection

We want to build a dataset that can be used by the AI community to build better Large Action Models for better Web Agents. You can see our work so far on building community datasets on our BigAction HuggingFace page.

This is why LaVague collects the following user data telemetry by default:

  • Version of LaVague installed
  • Code generated for each web action step
  • LLM used (e.g. GPT-4)
  • Randomly generated anonymous user ID
  • Whether you are using a CLI command or our library directly
  • The URL you performed an action on
  • Whether the action failed or succeeded
  • Error message, where relevant
  • The source nodes (chunks of HTML code retrieved from the web page to perform this action)

🚫 Turn off all telemetry

If you want to turn off all telemetry, you can set the TELEMETRY_VAR environment variable to NONE.

If you are running LaVague locally in a Linux environment, you can persistently set this variable for your environment with the following steps:

  1. Add TELEMETRY_VAR=NONE to your ~/.bashrc, ~/.bash_profile, or ~/.profile file (which file you have depends on your shell and its configuration)
  2. Run `source ~/.bashrc` (or the equivalent for ~/.bash_profile or ~/.profile) to apply your modifications without having to log out and back in

In a notebook cell, you can use:

import os
os.environ['TELEMETRY_VAR'] = "NONE"


License: Apache License 2.0

