building_llm_agent_library

Please cite: Liang Zhang. 2024. Large Language Model-Based Agent Schema and Library for Building Energy Analysis and Modeling. 2025 ASHRAE Winter Conference. [Under Review]

Reference code from the MOOC "AI Agents in LangGraph": https://www.deeplearning.ai/short-courses/ai-agents-in-langgraph/

Large language models (LLMs) have shown exceptional abilities in reasoning, planning, and code generation, making them highly promising for automating data processing and modeling in the building energy domain. LLM prompting, which processes input and generates output in a single interaction, cannot handle complex engineering tasks on its own. LLM agents extend the capabilities of prompting by functioning as autonomous, interactive, goal-oriented systems with an LLM at their core. Despite their potential, there is currently no standardized paradigm in the building energy sector that researchers and engineers can follow to create, access, and share effective LLM agents without starting from scratch. This paper introduces a JSON-based agent schema designed to standardize the description of LLM agents. Additionally, the paper introduces an open-source library on GitHub that serves as a centralized repository for LLM agents designed for building energy analysis and modeling, all structured according to this schema. The library is publicly accessible, allowing users to utilize and upload agents and thereby enhancing the accessibility of LLM agents.

The library is currently under development and contains two case studies. They showcase the schema's effectiveness with two example agents, both archived in the library: a Reasoning + Action (ReAct) workflow for data-driven energy modeling, and a Flow Engineering workflow for generating EnergyPlus models from building specifications.

This library is built on a standardized, platform-agnostic schema that focuses solely on the agentic workflow and procedural aspects of LLM agents, independent of the underlying tools. By providing detailed workflows and standardized definitions within this library, developers will be able to share and collaborate more effectively, fostering a more unified and versatile approach to LLM agent development. This proposed schema and library aim to facilitate the systematic reuse and deployment of LLM agents, particularly in the building energy sector. The establishment of such a library will serve as a foundational step toward creating a more cohesive and accessible environment for the development and application of intelligent agents, ultimately advancing the field of building energy analysis and modeling.

The agent library employs a structure inspired by LangGraph, a leading agent workflow tool, which integrates the following key components:

1) START/END points indicating the start and end of the workflow.

2) Nodes, including agent nodes that call LLMs to plan, reason, and generate content and collect their output, and action nodes that call and execute tools and collect their output.

3) Tools, which are called by action nodes to carry out detailed, domain-specific tasks.

4) Edges, which connect two or three nodes. Simple edges link two components, such as the START point, an agent node, an action node, or the END point. Conditional edges create more complex connections among three components, one serving as the input and the other two as outputs, with the path determined by functions.

5) Memories, which detail what input and output messages/data are transmitted between nodes during the agent run.

These elements work together to define the flow and logic of the agent operations. More details about the schema can be found in the paper: Liang Zhang. 2024. Large Language Model-Based Agent Schema and Library for Building Energy Analysis and Modeling. 2025 ASHRAE Winter Conference. [Under Review]
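As a rough illustration of how these components could appear in a schema instance, the sketch below builds a workflow description as a Python dict and dumps it to JSON. The field names, workflow ID, and tool name are assumptions for illustration only, not the published schema.

```python
import json

# Hypothetical schema instance; field names are illustrative assumptions, not
# the official schema. It mirrors the five component types listed above:
# START/END, nodes, tools, edges (simple and conditional), and memories.
example_workflow = {
    "id": "example_react_workflow",
    "nodes": [
        {"name": "agent", "type": "agent", "description": "LLM plans, reasons, and generates content"},
        {"name": "action", "type": "action", "description": "executes the selected tool"},
    ],
    "tools": [
        {"name": "load_meter_data", "file": "tools/load_meter_data.py"},
    ],
    "edges": [
        {"type": "simple", "from": "START", "to": "agent"},
        {"type": "conditional", "from": "agent",
         "condition": "has_tool_call", "if_true": "action", "if_false": "END"},
        {"type": "simple", "from": "action", "to": "agent"},
    ],
    "memories": [
        {"name": "messages", "description": "chat history passed between nodes during the run"},
    ],
}

with open("example_react_workflow.json", "w") as f:
    json.dump(example_workflow, f, indent=2)
```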

The library is organized as follows:

1) A code template ("code_template.py"/"code_template.ipynb") that helps developers create agentic workflows based on the JSON-based schema.

2) An agentic workflow folder named "LLM_agentic_workflows." Within this folder, each agentic workflow has a subfolder named according to its ID. Each subfolder contains: i) the JSON schema file, ii) a run.py (Python script), run.ipynb (Jupyter Notebook), or a cloud-service notebook such as Google Colab that executes the entire workflow without errors, iii) a "tools" subfolder where the functions defined in the JSON file are stored in ".py" format, iv) an "evaluation" subfolder with scripts to reproduce the results and manually verify their functionality, and v) a "data" subfolder that stores all input, temporary, and output files.

The structure is illustrated in Figure 3.
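A minimal sketch of how a run script might locate a workflow's files under this layout is shown below; the helper function, the workflow ID, and the schema file name are assumptions, not part of the library's API.

```python
from pathlib import Path
import json


def load_workflow(workflow_id: str, library_root: str = "LLM_agentic_workflows") -> dict:
    """Load the JSON schema of one workflow subfolder (hypothetical helper)."""
    workflow_dir = Path(library_root) / workflow_id
    schema_path = workflow_dir / f"{workflow_id}.json"  # assumed schema file name
    with open(schema_path) as f:
        schema = json.load(f)
    # The remaining subfolders hold tool functions, evaluation scripts,
    # and input/temporary/output data for the workflow.
    for sub in ("tools", "evaluation", "data"):
        if not (workflow_dir / sub).is_dir():
            raise FileNotFoundError(f"missing expected subfolder: {sub}")
    return schema


# Example usage with a hypothetical workflow ID:
# schema = load_workflow("example_react_workflow")
```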

The code template is a crucial component of the library, providing the foundational structure for agent development. It extracts information from the agent schema to streamline and even automate the agent development process. This code template can be adapted to various LLM agent ecosystems, including AutoGen and LangChain. In this study, due to space constraints, the paper focuses on the template designed to work within the LangChain environment, specifically using the LangGraph method. LangGraph is a powerful tool that facilitates the creation and management of complex agent workflows within the LangChain ecosystem.
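For readers unfamiliar with the LangGraph pattern the template follows, the sketch below shows a minimal agent/action loop built with LangGraph. It is not the library's actual code_template.py; the model name, the node names, and the placeholder tool load_meter_data are assumptions made for illustration.

```python
from typing import Annotated, TypedDict

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition


# Shared memory: the message list passed between nodes during the run.
class State(TypedDict):
    messages: Annotated[list, add_messages]


# Placeholder tool standing in for a domain-specific building-energy function.
@tool
def load_meter_data(building_id: str) -> str:
    """Return hourly meter data for a building (stub)."""
    return f"meter data for {building_id}"


llm = ChatOpenAI(model="gpt-4o").bind_tools([load_meter_data])


def agent_node(state: State) -> State:
    # Agent node: call the LLM to plan, reason, and decide whether to use a tool.
    return {"messages": [llm.invoke(state["messages"])]}


graph = StateGraph(State)
graph.add_node("agent", agent_node)
graph.add_node("action", ToolNode([load_meter_data]))  # action node executing tools
graph.add_edge(START, "agent")                          # simple edge
graph.add_conditional_edges("agent", tools_condition,   # conditional edge
                            {"tools": "action", END: END})
graph.add_edge("action", "agent")                       # loop back after the tool call
app = graph.compile()
```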

About

License: MIT License

Languages: Jupyter Notebook 99.4%, Python 0.6%