This is the repository for the paper "StructLM: Towards Building Generalist Models for Structured Knowledge Grounding".
You can use this repository to evaluate the models. To reproduce the models, use SKGInstruct in your preferred finetuning framework. The checkpoints are released on Hugging Face.

The processed test data is already provided; the prompts used for training and testing can be found in `prompts/`.
- Arxiv Link: https://arxiv.org/abs/2402.16671
- Website: https://tiger-ai-lab.github.io/StructLM/
Easy reproduction can be done with Llama-Factory:

- Follow the data preparation steps in their repo to add one of the StructLM datasets from Hugging Face.
- Use the parameters in `StructLM_finetune.yaml` as a reference, replacing the parameters in square brackets `[]` with your own paths. Then start the training with:

```shell
llamafactory-cli train StructLM_finetune.yaml
```
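For reference, a minimal `StructLM_finetune.yaml` might look like the sketch below. The field names follow Llama-Factory's standard SFT config format; the specific values shown (model path, dataset name, hyperparameters, output directory) are illustrative placeholders, not the paper's exact settings, and should be replaced with your own.

```yaml
# Sketch of a Llama-Factory SFT config for StructLM reproduction.
# Bracketed values are placeholders -- substitute your own paths/names.
stage: sft
do_train: true
finetuning_type: full
model_name_or_path: [path/to/base/model]
dataset: [structlm_dataset_name]
template: default
cutoff_len: 4096
output_dir: [path/to/output/dir]
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 2.0e-5
num_train_epochs: 3.0
bf16: true
```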
💡 Note: the original StructLM-7B checkpoint is currently broken. Please use the StructLM-7B-Mistral model instead.
Requirements:
- Python 3.10
- Linux
- CUDA 11.8 support
```shell
pip install -r requirements.txt
./download.sh
```

This will download:
- StructLM-7B-Mistral
- The raw data required for executing evaluation
- The processed test data splits ready for evaluation
We can run inference on the downloaded checkpoint:

```shell
python mistral-fix-data.py
./run_test_eval.sh StructLM-7B-Mistral
```
You can download these models separately with:

```shell
huggingface-cli download --repo-type=model --local-dir=models/ckpts/StructLM-13B TIGER-Lab/StructLM-13B
```
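If you prefer to download checkpoints from Python, the `huggingface_hub` library (which also backs `huggingface-cli`) offers `snapshot_download`. This is a minimal sketch mirroring the CLI command above; the helper name `download_checkpoint` is our own, not part of this repo.

```python
from huggingface_hub import snapshot_download


def download_checkpoint(repo_id: str, local_dir: str) -> str:
    """Download a full model snapshot from the Hugging Face Hub.

    Mirrors: huggingface-cli download --repo-type=model
             --local-dir=<local_dir> <repo_id>
    Returns the local path of the downloaded snapshot.
    """
    return snapshot_download(repo_id=repo_id, repo_type="model", local_dir=local_dir)


if __name__ == "__main__":
    # Equivalent to the CLI command above (note: a multi-GB download).
    download_checkpoint("TIGER-Lab/StructLM-13B", "models/ckpts/StructLM-13B")
```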
Then, you can run inference on the downloaded checkpoints:

```shell
./run_test_eval.sh StructLM-13B
./run_test_eval.sh StructLM-34B
```
These evaluations will write their results to `outputs/StructLM-*/`.

The evaluation metrics in this repository were adapted from the evaluation files in https://github.com/HKUNLP/UnifiedSKG.
```bibtex
@misc{zhuang2024structlm,
  title={StructLM: Towards Building Generalist Models for Structured Knowledge Grounding},
  author={Alex Zhuang and Ge Zhang and Tianyu Zheng and Xinrun Du and Junjie Wang and Weiming Ren and Stephen W. Huang and Jie Fu and Xiang Yue and Wenhu Chen},
  year={2024},
  eprint={2402.16671},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```