
LLaMP 🦙🔮

Large Language Model Made Powerful for High-fidelity Materials Knowledge Retrieval and Distillation

Tip

TL;DR: LLaMP is a multimodal retrieval-augmented generation (RAG) framework of hierarchical ReAct agents that can dynamically and recursively interact with Materials Project to ground LLMs on high-fidelity materials informatics.

This repository accompanies our paper LLaMP: Large Language Model Made Powerful for High-fidelity Materials Knowledge Retrieval and Distillation. Our codebase is built on LangChain and designed to be modular and extensible; it can be used to reproduce the experiments in the paper as well as to develop new ones.

LLaMP is also a homonym of Large Language model Materials Project. 😉 It empowers LLMs with a large-scale computational materials database to reduce the likelihood of hallucination in materials informatics.
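The grounding idea can be illustrated outside the package as well: instead of letting the LLM recall materials properties from its parametric memory, the agent calls tools whose answers come from live Materials Project queries. The sketch below is not LLaMP's internal code; it is a minimal stand-in that assumes `mp_api` and `langchain-core` are installed, that an `MP_API_KEY` environment variable is set, and uses a hypothetical tool name `search_materials_summary`.

```python
import json

from langchain_core.tools import tool
from mp_api.client import MPRester  # reads MP_API_KEY from the environment by default


@tool
def search_materials_summary(formula: str) -> str:
    """Look up computed properties for a chemical formula on Materials Project."""
    with MPRester() as mpr:
        docs = mpr.materials.summary.search(
            formula=formula,
            fields=[
                "material_id", "formula_pretty", "band_gap",
                "formation_energy_per_atom", "energy_above_hull",
            ],
        )
    # Return a compact JSON payload the agent can quote instead of guessing numbers.
    return json.dumps([
        {
            "material_id": str(d.material_id),
            "formula": d.formula_pretty,
            "band_gap_eV": d.band_gap,
            "formation_energy_eV_per_atom": d.formation_energy_per_atom,
            "energy_above_hull_eV_per_atom": d.energy_above_hull,
        }
        for d in docs
    ])


if __name__ == "__main__":
    # A ReAct agent would call this tool during its reasoning loop; here we call it directly.
    print(search_materials_summary.invoke({"formula": "LiFePO4"}))
```

Because the numbers are retrieved rather than generated, the agent can cite them verbatim in its answer, which is the core of the hallucination-reduction argument.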

🔮 Quick Start

Python API

```shell
git clone https://github.com/chiang-yuan/llamp.git
cd llamp/api
pip install -e .
```

After installation, check out the notebooks in experiments to start.
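If you prefer a script over a notebook, the snippet below sketches how a single-level ReAct loop might be wired around a Materials Project tool. It is a rough stand-in for LLaMP's own hierarchical agents, assuming `langgraph`, `langchain-openai`, and `mp_api` are installed and that `OPENAI_API_KEY` and `MP_API_KEY` are set; the tool and model names are placeholders.

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from mp_api.client import MPRester


@tool
def mp_band_gap(formula: str) -> str:
    """Return Materials Project band gaps (eV) for all entries matching a formula."""
    with MPRester() as mpr:  # assumes MP_API_KEY is set
        docs = mpr.materials.summary.search(
            formula=formula, fields=["material_id", "band_gap"]
        )
    return "; ".join(f"{d.material_id}: {d.band_gap} eV" for d in docs)


# LangGraph's prebuilt ReAct agent stands in for LLaMP's hierarchical agents here.
agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), [mp_band_gap])
result = agent.invoke(
    {"messages": [("user", "What band gaps does Materials Project report for TiO2?")]}
)
print(result["messages"][-1].content)
```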

(Optional) Docker Web Interface

```shell
docker-compose up --build
```

👋 Contributing

We understand it is sometimes difficult to navigate the Materials Project database! We want everyone to be able to access materials informatics through conversational AI. We are looking for contributors to help us build a more powerful and user-friendly LLaMP that supports more MP API endpoints, external datastores, and agents.

To contribute to LLaMP, please follow these steps:

  1. Fork the repository
  2. Set up environment variables (a quick sanity check is sketched after this list)
    cp .env.example .env.local
  3. Deploy local development environment
    docker-compose up
  4. Make changes and submit a pull request
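For step 2, a small check like the one below can confirm that `.env.local` was filled in before you bring the stack up. The variable names are assumptions (treat `.env.example` as the authoritative list), and `python-dotenv` is an assumed extra dependency.

```python
from dotenv import dotenv_values  # pip install python-dotenv

# Hypothetical key names; replace with whatever .env.example actually lists.
EXPECTED_KEYS = ("OPENAI_API_KEY", "MP_API_KEY")

config = dotenv_values(".env.local")
missing = [key for key in EXPECTED_KEYS if not config.get(key)]
if missing:
    raise SystemExit(f"Missing values in .env.local: {', '.join(missing)}")
print("All expected keys are set in .env.local")
```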

🌟 Authors and Citation


If you use LLaMP, our code, or our data in your research, please cite our paper:

@article{chiang2024llamp,
  title={LLaMP: Large Language Model Made Powerful for High-fidelity Materials Knowledge Retrieval and Distillation},
  author={Chiang, Yuan and Chou, Chia-Hong and Riebesell, Janosh},
  journal={arXiv preprint arXiv:2401.17244},
  year={2024}
}

🤗 Acknowledgements

We thank Matthew McDermott (@mattmcdermott) and Jordan Burns in Materials Science and Engineering at UC Berkeley for their valuable feedback and suggestions. We also thank the Materials Project team for their support and for providing the data used in this work, and Dr. Karlo Berket (@kbuma) and Dr. Anubhav Jain (@computron) for their advice and guidance.

About

A web app and Python API for a multi-modal RAG framework that grounds LLMs on high-fidelity materials informatics. An agentic materials scientist powered by @materialsproject, @langchain-ai, and @openai.

http://ingress.llamp.development.svc.spin.nersc.org/about

License: Other

