A research implementation of the ReAct (Reasoning + Acting) paradigm for web research. The system analyzes a task, plans concrete tool actions, executes them (search, browse, optional code), and synthesizes an answer with sources and basic verification. The design is task-agnostic: no topic-specific heuristics are required to operate across different questions.
This repository is intended for studying structured approaches to web research and evaluation. It is not a production system.
- Task-agnostic analysis of question structure to select a synthesis strategy without topic-specific rules
- Adaptive planning that produces actionable steps over registered tools (search, browser, present, optional code)
- Multi-strategy synthesis (extract-and-verify, aggregate-and-filter, collect-and-organize, comprehensive-synthesis)
- Robust parameter and URL handling with snippet fallback when pages cannot be fetched
- Source attribution and lightweight cross-source verification
- Task-Adaptive Reasoning: infers expected answer type (factual lookup, list, comparison, narrative) from the task text
- Planning and Tool Use: generates steps that call SearchTool, BrowserTool, and PresentationTool with validated parameters
- Content Extraction: prefers main content from web pages; falls back to search snippets when needed
- Verification: groups paraphrases and scores support by distinct domains (extract-and-verify strategy)
- Structured Output: formats results deterministically for the requested answer type
- Configuration and Logging: configurable limits and detailed logs for analysis and evaluation
- No topic-specific filters or keywords are required for operation
- Requested list size is inferred from the task text (e.g., “10 statements”), not hardcoded
- De-duplication is based on content similarity and source/date keys, not hand-written topic rules (see the sketch after this list)
- Statement/quote extraction uses generic patterns (quotes and sentence heuristics)
- Verification relies on cross-source/domain support rather than task-specific logic
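To make the de-duplication idea concrete, here is a minimal sketch, assuming items arrive as dicts with content, source, and date fields (a hypothetical schema for illustration, not the repository's actual data model):

```python
import re
from difflib import SequenceMatcher

def normalize(text: str) -> str:
    """Lowercase and collapse punctuation/whitespace for comparison."""
    return re.sub(r"\W+", " ", text.lower()).strip()

def deduplicate(items: list[dict], threshold: float = 0.9) -> list[dict]:
    """Keep an item only if its (source, date) key is new and its
    normalized content is not near-identical to anything already kept."""
    kept: list[dict] = []
    seen_keys: set[tuple] = set()
    for item in items:
        key = (item.get("source"), item.get("date"))
        if all(key) and key in seen_keys:
            continue  # same source and date already represented
        norm = normalize(item.get("content", ""))
        if any(SequenceMatcher(None, norm, normalize(k["content"])).ratio()
               >= threshold for k in kept):
            continue  # content is a close paraphrase of a kept item
        seen_keys.add(key)
        kept.append(item)
    return kept
```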
- SearchTool: returns a list of results (title, link, snippet)
- BrowserTool: fetches a URL and extracts main content or full page; can aggregate search snippets if a URL is not available
- PresentationTool: assembles the final answer for the chosen strategy
- CodeGeneratorTool: optional, for tasks that require computation (e.g., filtering or plotting)
See implementation: agent/agent.py, agent/planner.py, tools/search.py, tools/browser.py, tools/presentation_tool.py.
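For orientation, a minimal sketch of what the tool interface and registry might look like. The class shapes below are assumptions for illustration; the files listed above contain the real definitions:

```python
from abc import ABC, abstractmethod

class Tool(ABC):
    """Assumed minimal tool contract: named, takes keyword parameters,
    returns a result dict."""
    name: str

    @abstractmethod
    def execute(self, **params) -> dict:
        ...

class SearchTool(Tool):
    name = "search"

    def execute(self, query: str, max_results: int = 5) -> dict:
        # The real tool calls the Serper API; stubbed for illustration.
        return {"results": [{"title": "", "link": "", "snippet": ""}][:max_results]}

class ToolRegistry:
    """Maps tool names to instances so plan steps can be dispatched by name."""
    def __init__(self) -> None:
        self._tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def run(self, name: str, **params) -> dict:
        return self._tools[name].execute(**params)
```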
- Analyze the task to determine answer type and information targets
- Create a plan: search → browse (one or more pages) → present (optionally code when needed)
- Execute tools with parameter and URL resolution; use snippet fallback as needed
- Synthesize the answer using one of four strategies; attribute sources and verify where applicable
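A compressed sketch of that analyze-plan-act cycle, building on the registry sketch above. Here analyze_task and make_plan are hypothetical stand-ins for the agent's real analysis and planning logic:

```python
def analyze_task(task: str) -> dict:
    """Hypothetical stand-in: infer answer type and information targets."""
    return {"answer_type": "factual", "targets": [task]}

def make_plan(analysis: dict) -> list[tuple]:
    """Hypothetical stand-in: one search step per target, then present."""
    steps = [("search", {"query": target}) for target in analysis["targets"]]
    return steps + [("present", {"answer_type": analysis["answer_type"]})]

def run_task(task: str, registry: "ToolRegistry") -> list:
    """Analyze -> plan -> act: execute each step and collect observations."""
    analysis = analyze_task(task)
    observations = []
    for tool_name, params in make_plan(analysis):
        observations.append(registry.run(tool_name, **params))
    return observations
```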
This project implements the ReAct paradigm with dynamic task analysis and adaptive synthesis:
```mermaid
graph TD
    A[Main] --> B[WebResearchAgent]
    B --> C1[Memory]
    B --> C2[Planner]
    B --> C3[Comprehension]
    B --> C4[ToolRegistry]

    %% Enhanced ReAct: Dynamic Reasoning
    C3 -->|"Dynamic Analysis"| G1[Task Analysis]
    G1 --> G2[Answer Type Detection]
    G1 --> G3[Information Target ID]
    G1 --> G4[Output Structure Inference]

    C2 -->|"Adaptive Planning"| D[Plan]
    D -->|Contains| E[PlanSteps]

    %% ReAct: Acting component
    C4 -->|Registers| F1[SearchTool]
    C4 -->|Registers| F2[BrowserTool]
    C4 -->|Registers| F3[CodeGeneratorTool]
    C4 -->|Registers| F4[PresentationTool]

    %% Enhanced ReAct: Multi-Strategy Synthesis
    C3 -->|"Strategy Selection"| S1[Extract & Verify]
    C3 -->|"Strategy Selection"| S2[Aggregate & Filter]
    C3 -->|"Strategy Selection"| S3[Collect & Organize]
    C3 -->|"Strategy Selection"| S4[Comprehensive Synthesis]

    %% ReAct: Observation component
    C1 -->|"Stores"| M1[Results & Entities]

    %% ReAct: Iteration cycle
    B -->|"1. Analyze Task"| G1
    G1 -->|"2. Plan Strategy"| C2
    C2 -->|"3. Execute Actions"| C4
    C4 -->|"4. Synthesize Answer"| S1
    S1 -->|"5. Verify & Refine"| B

    style B fill:#f9f,stroke:#333,stroke-width:2px
    style G1 fill:#fbb,stroke:#333,stroke-width:2px
    style S1 fill:#bfb,stroke:#333,stroke-width:2px
    style C1 fill:#bbf,stroke:#333
    style C2 fill:#bbf,stroke:#333
    style C3 fill:#bbf,stroke:#333
    style C4 fill:#bbf,stroke:#333
    style F1 fill:#bfb,stroke:#333
    style F2 fill:#bfb,stroke:#333
    style F3 fill:#bfb,stroke:#333
    style F4 fill:#bfb,stroke:#333
```
The diagram above illustrates how the Web Research Agent processes research tasks:
1. Task Analysis Phase:
   - When a user submits a research question, the system first analyzes the task structure
   - The Comprehension component uses pattern recognition to detect answer types (factual, comparative, list-based, etc.)
   - It identifies specific information targets needed to answer the question
   - It determines the appropriate output structure for the anticipated answer
2. Planning Phase:
   - Based on the task analysis, the Planner creates a series of search strategies
   - It generates concrete plan steps targeting the identified information needs
   - Each plan step specifies what information to retrieve and how to process it
3. Action Phase:
   - The ToolRegistry orchestrates the execution of research tools:
     - SearchTool finds relevant information sources
     - BrowserTool extracts content from web pages
     - CodeGeneratorTool creates analysis scripts when needed
     - PresentationTool formats findings appropriately
4. Synthesis Phase:
   - Based on the question type, one of four synthesis strategies is selected:
     - Extract-and-Verify for factual questions
     - Aggregate-and-Filter for comparative analyses
     - Collect-and-Organize for list-building tasks
     - Comprehensive-Synthesis for complex, multi-faceted questions
   - The Memory component provides context by storing intermediate findings and entities
5. Refinement Loop:
   - If the synthesized answer is incomplete, the system may return to planning
   - This iterative process continues until a satisfactory answer is produced
   - The final output is tailored to directly address the specific question asked
This research implementation demonstrates how a structured approach to web research can adapt to different question types without relying on hardcoded rules.
- Python 3.9 or higher
- pip (Python package installer)
1. Clone the repository:

   ```bash
   git clone https://github.com/ashioyajotham/web_research_agent.git
   cd web_research_agent
   ```
2. Create a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```
3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```
The system requires the following external services for operation:
- Gemini API: Language model for reasoning and synthesis
- Serper API: Web search results for information gathering
Create a `.env` file in the project root:
```text
GEMINI_API_KEY=your_gemini_api_key
SERPER_API_KEY=your_serper_api_key
```
Alternatively, export the keys as environment variables:

```bash
export GEMINI_API_KEY=your_gemini_api_key
export SERPER_API_KEY=your_serper_api_key
```
Or set them programmatically:

```python
from config.config_manager import init_config

config = init_config()
config.update('gemini_api_key', 'your_gemini_api_key')
config.update('serper_api_key', 'your_serper_api_key')
```
These parameters control the system's behavior and can be modified for experimental purposes:
| Parameter | Environment Variable | Description | Default |
|---|---|---|---|
| gemini_api_key | GEMINI_API_KEY | API key for Gemini LLM | - |
| serper_api_key | SERPER_API_KEY | API key for Serper.dev search | - |
| log_level | LOG_LEVEL | Logging detail level | INFO |
| max_search_results | MAX_SEARCH_RESULTS | Search results to process | 5 |
| memory_limit | MEMORY_LIMIT | Working memory capacity | 100 |
| output_format | OUTPUT_FORMAT | Results format (markdown, text, html) | markdown |
| timeout | REQUEST_TIMEOUT | Web request timeout (seconds) | 30 |
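As a rough illustration of how these environment variables could map onto defaults (the real logic lives in config/config_manager.py and may differ):

```python
import os

# Assumed mapping from config keys to (environment variable, default).
DEFAULTS = {
    "gemini_api_key": ("GEMINI_API_KEY", None),
    "serper_api_key": ("SERPER_API_KEY", None),
    "log_level": ("LOG_LEVEL", "INFO"),
    "max_search_results": ("MAX_SEARCH_RESULTS", 5),
    "memory_limit": ("MEMORY_LIMIT", 100),
    "output_format": ("OUTPUT_FORMAT", "markdown"),
    "timeout": ("REQUEST_TIMEOUT", 30),
}

def load_config() -> dict:
    """Environment variables override defaults; integer values are coerced."""
    config = {}
    for key, (env_var, default) in DEFAULTS.items():
        raw = os.environ.get(env_var)
        if raw is None:
            config[key] = default
        elif isinstance(default, int):
            config[key] = int(raw)
        else:
            config[key] = raw
    return config
```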
1. Create a text file with research questions:

   ```text
   # tasks.txt
   Find the name of the COO of the organization that mediated talks between US and Chinese AI companies in Geneva in 2023.

   By what percentage did Volkswagen reduce their Scope 1 and Scope 2 greenhouse gas emissions in 2023 compared to 2021?
   ```

   Note: Empty lines between tasks help the system distinguish between separate questions.
2. Run the research process:

   ```bash
   python main.py tasks.txt
   ```
3. Results will be saved to the `results/` directory as Markdown files.
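The blank-line convention mentioned in step 1 could be parsed roughly as follows; this is a sketch, not the actual utils/task_parser.py implementation:

```python
def load_tasks(path: str) -> list[str]:
    """Split a task file into individual questions on blank lines."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    # Each run of non-blank lines is one task; comment lines are skipped.
    tasks = []
    for block in text.split("\n\n"):
        lines = [ln for ln in block.splitlines()
                 if ln.strip() and not ln.strip().startswith("#")]
        if lines:
            tasks.append(" ".join(lines))
    return tasks
```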
For complex queries with multiple requirements:
```text
# multi_criteria_tasks.txt
Compile a list of companies satisfying the following criteria:
They are based in the EU
They operate within the motor vehicle sector
Their greenhouse gas emissions are available for 2021-2023
They earned more than €1B in revenue in 2023
```
The system recognizes this as a single multi-criteria task and adapts its synthesis strategy accordingly.
To store results somewhere else, pass `--output`:

```bash
python main.py tasks.txt --output custom_output_dir
```
| Option | Description | Default |
|---|---|---|
| task_file | Path to text file containing tasks | (required) |
| --output | Directory to store results | results/ |
The project structure reflects the enhanced ReAct implementation with dynamic analysis:
- agent/: Core reasoning and coordination
  - agent.py: Main controller with dynamic task analysis and multi-strategy synthesis
  - comprehension.py: Enhanced reasoning with pattern recognition for answer types
  - memory.py: Short-term memory for tracking observations and synthesis context
  - planner.py: Adaptive planning based on identified information targets
- tools/: Action components
  - search.py: Information retrieval with robust URL resolution
  - browser.py: Content extraction with multiple fallback strategies
  - code_generator.py: Data analysis when computational tasks are detected
  - presentation_tool.py: Task-adaptive result formatting
  - tool_registry.py: Tool management system
- utils/: Supporting functions
  - console_ui.py: Interface components
  - formatters.py: Dynamic output structuring
  - task_parser.py: Multi-criteria task parsing
  - criteria_filter.py: Multi-criteria verification
  - logger.py: Detailed reasoning and synthesis tracking
- config/: Research environment configuration
- main.py: Entry point and experiment runner
The system implements pattern recognition to analyze any research question and determine:
- Answer Type Detection: Identifies whether the question expects a factual answer, comparison, list, or comprehensive analysis
- Information Target Identification: Determines what specific information needs to be gathered
- Output Structure Inference: Predicts the appropriate format for presenting the answer
- Synthesis Strategy Selection: Chooses from four synthesis approaches based on task characteristics
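A toy sketch of pattern-based answer type detection; the patterns below are invented for illustration and are far simpler than the real comprehension logic:

```python
import re

# Hypothetical keyword/pattern heuristics for each answer type.
ANSWER_TYPE_PATTERNS = [
    ("list", re.compile(r"\b(compile|list|name all|top \d+|\d+ (statements|examples))\b", re.I)),
    ("comparison", re.compile(r"\b(compare|versus|vs\.?|difference between|compared to)\b", re.I)),
    ("factual", re.compile(r"^(who|what|when|where|which|how (many|much))\b", re.I)),
]

def detect_answer_type(task: str) -> str:
    """Return the first matching answer type, defaulting to narrative."""
    for answer_type, pattern in ANSWER_TYPE_PATTERNS:
        if pattern.search(task):
            return answer_type
    return "narrative"
```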
Extract-and-Verify is used for factual lookup questions requiring specific information:
- Searches for target information across multiple sources
- Cross-validates findings for accuracy
- Provides direct answers with source verification
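The distinct-domain verification idea can be sketched like this, assuming candidate answers map to lists of supporting URLs (an assumed shape, not the actual data structure):

```python
from urllib.parse import urlparse

def domain_support(claim_sources: list[str]) -> int:
    """Score a candidate answer by how many distinct domains support it."""
    domains = {urlparse(url).netloc.removeprefix("www.") for url in claim_sources}
    return len(domains)

def best_supported(candidates: dict[str, list[str]]) -> str:
    """Pick the candidate answer backed by the most distinct domains."""
    return max(candidates, key=lambda answer: domain_support(candidates[answer]))
```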
Aggregate-and-Filter is applied to comparison and analytical questions:
- Collects relevant data points from multiple sources
- Applies filtering criteria to focus on relevant information
- Synthesizes comparative or analytical insights
Collect-and-Organize is employed for list-building and compilation tasks:
- Systematically gathers items meeting specified criteria
- Organizes findings in structured formats
- Validates completeness of collected information
Comprehensive-Synthesis is used for complex, multi-faceted research questions:
- Integrates information from diverse sources
- Builds coherent narratives or explanations
- Balances breadth and depth of coverage
The system includes robust handling of web search challenges:
- Multiple URL extraction strategies from search results
- Fallback mechanisms for content retrieval failures
- Validation of information sources and URLs
- Graceful degradation when full content is unavailable
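A minimal sketch of the snippet fallback, assuming search results carry link and snippet keys; the real behavior spans tools/search.py and tools/browser.py:

```python
import requests

def fetch_with_fallback(result: dict, timeout: int = 30) -> str:
    """Try to fetch the page behind a search result; if that fails,
    degrade gracefully to the search snippet."""
    url = result.get("link", "")
    if url.startswith(("http://", "https://")):
        try:
            response = requests.get(url, timeout=timeout)
            response.raise_for_status()
            return response.text  # real code would extract main content here
        except requests.RequestException:
            pass  # blocked, timed out, or unreachable: fall through
    return result.get("snippet", "")
```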
The system extracts relevant entities while maintaining focus on answering the specific question:
- People: Names of individuals relevant to the research question
- Organizations: Companies, agencies, groups mentioned in sources
- Roles: Job titles and positions when relevant to the query
- Locations: Geographic information pertinent to the task
- Dates: Temporal references important for the research context
Entity extraction supports the synthesis process but does not drive the output format.
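As one illustration of generic, non-topic-specific extraction, a deliberately crude sketch using surface patterns only (the actual extraction is more careful):

```python
import re

# Generic surface patterns, not topic-specific rules (illustration only).
YEAR_RE = re.compile(r"\b(?:19|20)\d{2}\b")
PROPER_RE = re.compile(r"\b[A-Z][a-z]+(?:\s+[A-Z][a-z]+)+\b")

def extract_entities(text: str) -> dict:
    """Crude sketch: multi-word capitalized phrases as candidate people or
    organizations, four-digit years as candidate dates."""
    return {
        "candidates": sorted(set(PROPER_RE.findall(text))),
        "years": sorted(set(YEAR_RE.findall(text))),
    }
```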
The system implements multiple fallback strategies:
- Content Access Failures: When websites block access, falls back to search snippet analysis
- URL Resolution Issues: Multiple strategies for extracting valid URLs from search results
- Information Gaps: Acknowledges limitations and reports partial findings when complete answers aren't available
- Synthesis Failures: Provides available information even when preferred synthesis strategy fails
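These behaviors amount to a fallback chain: try strategies in order, and fall back to reporting partial findings. A minimal sketch, with the strategy list supplied by the caller:

```python
from typing import Callable, List, Optional

Strategy = Callable[[dict, list], Optional[str]]

def synthesize_with_fallbacks(analysis: dict, observations: list,
                              strategies: List[Strategy]) -> str:
    """Try each strategy in order; failures and empty answers fall through.

    As a last resort, acknowledge limits and report partial findings."""
    for strategy in strategies:
        try:
            answer = strategy(analysis, observations)
            if answer:
                return answer
        except Exception:
            continue  # one failed strategy should not sink the whole task
    return "Partial findings:\n" + "\n".join(str(obs) for obs in observations)
```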
You can modify system behavior through configuration:
```python
from config.config_manager import init_config

config = init_config()
config.update('output_format', 'html')   # Options: markdown, json, html
config.update('max_search_results', 10)  # Increase search breadth
```
As a research implementation, this project provides insights into both capabilities and current limitations:
- Web Access Constraints: Sites with anti-scraping measures may limit data collection, providing opportunities to study fallback strategies
- Complex Query Formulation: Highly specialized domains sometimes require domain-specific search strategies
- Synthesis Boundary Cases: Edge cases in task analysis provide insights into pattern recognition limitations
- Computational Requirements: Multi-criteria tasks with extensive search requirements demonstrate resource scaling behavior
Detailed logs in the `logs/` directory provide research data on:
- Dynamic task analysis decision patterns
- Synthesis strategy selection effectiveness
- URL resolution fallback frequency and success rates
- Entity extraction accuracy across different content types
- Error recovery mechanism performance
These logs are valuable for understanding the system's behavior and identifying areas for algorithmic improvement.
This research implementation welcomes contributions, particularly in areas of:
- Enhanced pattern recognition for task analysis
- Additional synthesis strategies for specialized question types
- Improved robustness in web content extraction
- Performance optimization for large-scale research tasks
Please see CONTRIBUTING.md for guidelines.
This project implements and extends the ReAct (Reasoning + Acting) paradigm from "ReAct: Synergizing Reasoning and Acting in Language Models" (Yao et al., 2022).
The foundational ReAct components:
- Reasoning: Task decomposition and solution planning
- Acting: Tool execution based on reasoning
- Observation: Processing action results
- Iteration: Feedback loops for refinement
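In code, the bare ReAct cycle looks roughly like the following generic sketch of the paradigm; the llm callable and its decision format are assumptions for illustration, not this repository's interfaces:

```python
from typing import Callable

def react_loop(task: str, llm: Callable[[str], dict],
               tools: dict, max_steps: int = 8) -> str:
    """Generic ReAct skeleton: reason, act, observe, repeat.

    llm is assumed to map a transcript to a decision dict like
    {"thought": str, "action": str, "input": str}."""
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        decision = llm(transcript)                         # Reasoning
        if decision["action"] == "finish":                 # Iteration ends
            return decision["input"]
        observation = tools[decision["action"]](decision["input"])  # Acting
        transcript += (f"Thought: {decision['thought']}\n"
                       f"Action: {decision['action']}[{decision['input']}]\n"
                       f"Observation: {observation}\n")    # Observation feedback
    return "No answer produced within the step budget."
```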
This implementation extends ReAct with:
- Dynamic Task Analysis: Pattern recognition for answer type detection without hardcoded rules
- Multi-Strategy Synthesis: Adaptive synthesis based on task characteristics rather than fixed approaches
- Robust Parameter Resolution: Multiple fallback mechanisms for real-world web research challenges
- Task-Focused Output: Direct answer generation aligned with question intent
Key observations from this implementation:
- Pattern Recognition Effectiveness: Dynamic task analysis successfully identifies answer types across diverse question structures
- Synthesis Strategy Impact: Different synthesis strategies show measurable differences in answer quality for different question types
- Fallback Strategy Value: Robust parameter resolution significantly improves success rates for web content access
- Entity vs. Answer Focus: Maintaining task focus while extracting entities produces more relevant outputs than entity-driven approaches
This research implementation draws from established agent concepts and development approaches, including:
- OpenAI Function Calling Guide - Best practices for tool-using agents
- Anthropic's Claude Agent Guide - Methods for reliable agent construction
- LangChain ReAct Implementation - Technical approaches for implementing ReAct
- Chain-of-Thought Prompting - Wei et al. (2022)
- Language Models as Zero-Shot Planners - Huang et al. (2022)
- Faithful Reasoning Using Large Language Models - Creswell et al. (2022)
- Toolformer: Language Models Can Teach Themselves to Use Tools - Schick et al. (2023)