TIGER-AI-Lab / StructLM

Code and data for "StructLM: Towards Building Generalist Models for Structured Knowledge Grounding" (COLM 2024)

Home Page: https://tiger-ai-lab.github.io/StructLM/


StructLM

This is the repository for the paper "StructLM: Towards Building Generalist Models for Structured Knowledge Grounding".

You can use this repository to evaluate the models. To reproduce the models, train on SKGInstruct in your preferred finetuning framework. The checkpoints are released on Huggingface.
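As a rough sketch of what finetuning on SKGInstruct involves, an instruction-tuning record is typically flattened into a single training string. The field names (`instruction`, `input`, `output`) and the `[INST]` template below are illustrative assumptions, not the dataset's documented schema:

```python
# Hypothetical sketch: format an SKGInstruct-style record into one
# training string. Field names and the chat template are assumptions
# for illustration, not the dataset's documented format.

def format_example(record: dict) -> str:
    """Concatenate instruction, structured input, and target answer."""
    return (
        f"[INST] {record['instruction']}\n\n{record['input']} [/INST] "
        f"{record['output']}"
    )

sample = {
    "instruction": "Answer the question using the table.",
    "input": "col: city | population row 1: Waterloo | 121436",
    "output": "121436",
}
print(format_example(sample))
```

Your finetuning framework's own chat template (e.g. the one Llama-Factory applies for your chosen `template`) should take precedence over a hand-rolled format like this.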

The processed test data is already provided; the prompts used for training and testing can be found in /prompts.
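For illustration, structured inputs in SKG tasks are usually linearized into plain text before being placed into a prompt. The separator conventions below are assumptions for the sketch, not necessarily what the templates in /prompts use:

```python
# Minimal sketch of table linearization for an SKG prompt. The exact
# "col:" / "row i:" / "|" separators are illustrative assumptions;
# consult the templates in /prompts for the real conventions.

def linearize_table(header: list[str], rows: list[list[str]]) -> str:
    """Flatten a table into 'col: ... row 1: ... row 2: ...' form."""
    parts = ["col: " + " | ".join(header)]
    for i, row in enumerate(rows, start=1):
        parts.append(f"row {i}: " + " | ".join(row))
    return " ".join(parts)

table = linearize_table(["city", "population"], [["Waterloo", "121436"]])
print(table)
# col: city | population row 1: Waterloo | 121436
```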


Training

Easy reproduction can be done with the Llama-Factory.

  1. Follow the data preparation steps in their repo to register one of the StructLM datasets from Huggingface.
  2. Use the parameters in StructLM_finetune.yaml as a reference, replacing the bracketed placeholders [] with your own paths. Then start training with llamafactory-cli train StructLM_finetune.yaml.
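As a rough sketch, a Llama-Factory SFT config has the shape below. The keys shown are standard Llama-Factory config fields, but the values are placeholders; the StructLM_finetune.yaml shipped in this repo is authoritative.

```yaml
# Illustrative shape of a Llama-Factory SFT config, NOT the repo's
# actual file. Replace bracketed values with your own paths/names.
stage: sft
do_train: true
model_name_or_path: "[path to base model]"
dataset: "[StructLM dataset name registered in Llama-Factory]"
template: mistral
finetuning_type: full
output_dir: "[output directory]"
```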

Evaluate StructLM-7B

💡 Unfortunately, the original StructLM-7B checkpoint is currently broken. Please use the StructLM-7B-Mistral model instead.

Install Requirements

Requirements:

  • Python 3.10
  • Linux
  • support for CUDA 11.8

pip install -r requirements.txt

Download files

./download.sh

This will download:

  1. StructLM-7B-Mistral
  2. The raw data required for executing evaluation
  3. The processed test data splits ready for evaluation

Run evaluation

For StructLM-7B-Mistral

We can run inference on the downloaded checkpoint.

python mistral-fix-data.py
./run_test_eval.sh StructLM-7B-Mistral

For StructLM-13B/34B

You can download these models separately with

huggingface-cli download --repo-type=model --local-dir=models/ckpts/StructLM-13B TIGER-Lab/StructLM-13B

Then, you can run the inference on the downloaded checkpoints.

./run_test_eval.sh StructLM-13B
./run_test_eval.sh StructLM-34B

These evaluations will write their results to outputs/StructLM-*/
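The layout of those result files is not documented here, so the following is only a hypothetical sketch of how one might gather per-task metric files from outputs/StructLM-*/ after a run (the metrics.json filename and its keys are assumptions):

```python
# Hypothetical sketch: collect metric JSON files under outputs/.
# The metrics.json filename and the directory layout are assumptions
# for illustration, not the repo's documented output format.
import json
from pathlib import Path
from tempfile import TemporaryDirectory

def collect_metrics(outputs_dir: Path) -> dict[str, dict]:
    """Map 'model/task' -> parsed metrics for every metrics.json found."""
    results = {}
    for path in sorted(outputs_dir.glob("StructLM-*/**/metrics.json")):
        key = f"{path.parent.parent.name}/{path.parent.name}"
        results[key] = json.loads(path.read_text())
    return results

# Demonstrate on a synthetic directory tree.
with TemporaryDirectory() as tmp:
    task_dir = Path(tmp) / "StructLM-7B-Mistral" / "tabfact"
    task_dir.mkdir(parents=True)
    (task_dir / "metrics.json").write_text(json.dumps({"accuracy": 0.85}))
    print(collect_metrics(Path(tmp)))
```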

Acknowledgements

The evaluation metrics in this repository were adapted, with modifications, from the evaluation code in https://github.com/HKUNLP/UnifiedSKG.

Cite

@misc{zhuang2024structlm,
    title={StructLM: Towards Building Generalist Models for Structured Knowledge Grounding},
    author={Alex Zhuang and Ge Zhang and Tianyu Zheng and Xinrun Du and Junjie Wang and Weiming Ren and Stephen W. Huang and Jie Fu and Xiang Yue and Wenhu Chen},
    year={2024},
    eprint={2402.16671},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

License: MIT License