AllTogether

This is the code repository for the paper "AllTogether: Investigating the Efficacy of Spliced Prompt for Web Navigation using Large Language Models".

Large Language Models (LLMs) have emerged as promising agents for web navigation tasks, interpreting objectives and interacting with web pages. However, the efficacy of spliced prompts for such tasks remains underexplored. We introduce AllTogether, a standardized prompt template that enhances web task context representation, thereby improving LLMs' performance in HTML-based web navigation. We then evaluate the efficacy of this approach through prompt learning and instruction finetuning based on open-source Llama-2 and API-accessible GPT models. Our results reveal that models like GPT-4 outperform smaller models in web navigation tasks. Additionally, we find that the length of the HTML snippet and the history trajectory significantly influences performance, and that prior step-by-step instructions without environmental feedback are less effective than a real-time feedback methodology. Overall, we believe our work provides valuable insights for future research in LLM-driven web agents.
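
The sketch below illustrates the general idea of a "spliced" prompt that combines the task objective, a truncated HTML snippet, and the recent action history into a single context. It is a minimal example for illustration only: the field names, ordering, and truncation limits (e.g. `max_html_chars`, `max_history_steps`) are assumptions and do not reflect the exact AllTogether template used in this repository.

```python
# Illustrative sketch of a spliced prompt for web navigation.
# Field names, ordering, and truncation limits are assumptions,
# not the exact template used in this repository.

def build_prompt(objective: str,
                 html_snippet: str,
                 history: list[str],
                 max_html_chars: int = 4000,
                 max_history_steps: int = 5) -> str:
    """Splice the task objective, a truncated HTML snippet, and the
    recent action history into a single prompt string."""
    trimmed_html = html_snippet[:max_html_chars]
    recent_history = history[-max_history_steps:]
    history_block = "\n".join(
        f"{i + 1}. {step}" for i, step in enumerate(recent_history)
    ) or "(no previous actions)"

    return (
        f"Objective: {objective}\n\n"
        f"Previous actions:\n{history_block}\n\n"
        f"Current page (HTML snippet):\n{trimmed_html}\n\n"
        "Next action:"
    )


if __name__ == "__main__":
    prompt = build_prompt(
        objective="Book a one-way flight from SFO to JFK",
        html_snippet="<form id='search'><input name='from'/>...</form>",
        history=["CLICK <a id='flights'>", "TYPE <input name='from'> 'SFO'"],
    )
    print(prompt)
```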

Requirements

See Mind2Web.
