LUMIA-Group / I2SRM

Official implementation for "I2SRM: Intra- and Inter-Sample Relationship Modeling for Multimodal Information Extraction" (ACM Multimedia Asia 2023)

I2SRM

Code for the ACM Multimedia Asia 2023 paper "I2SRM: Intra- and Inter-Sample Relationship Modeling for Multimodal Information Extraction".

1. Overview

The approach comprises two modules: intra-sample and inter-sample relationship modeling. The intra-sample module rectifies the distribution shift between modalities, while the inter-sample module enhances representations across samples via an AttnMixup strategy.
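To make the inter-sample idea concrete, here is a minimal NumPy sketch of an attention-weighted mixup across a batch: each sample is interpolated with an attention-weighted combination of the other samples. This is an illustrative assumption, not the paper's exact formulation (function name, similarity measure, and mixing coefficient are all hypothetical here).

```python
import numpy as np

def attn_mixup(feats, alpha=0.2, seed=0):
    """Sketch of an AttnMixup-style step (hypothetical formulation):
    mix each sample with an attention-weighted blend of the others."""
    rng = np.random.default_rng(seed)
    # Dot-product similarity between all pairs of samples: (B, B).
    scores = feats @ feats.T
    np.fill_diagonal(scores, -np.inf)             # exclude self-attention
    # Row-wise softmax over the remaining samples.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    partner = weights @ feats                     # attention-weighted partner
    lam = rng.beta(alpha, alpha)                  # mixup coefficient
    return lam * feats + (1.0 - lam) * partner

batch = np.random.default_rng(1).standard_normal((4, 8))
out = attn_mixup(batch)
print(out.shape)  # (4, 8)
```

The output keeps the batch shape, so such a step can be dropped between encoder and classifier without changing downstream dimensions.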

Please install the following requirements:

transformers==4.11.3
torchvision==0.8.2
torch==1.7.1
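For example, the pinned versions can be installed with pip (a setup sketch; using a fresh virtual environment is recommended, since these are older releases):

```shell
pip install transformers==4.11.3 torchvision==0.8.2 torch==1.7.1
```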

2. Dataset Preparation

The multimodal relation extraction dataset MNRE can be downloaded here.

The multimodal named entity recognition datasets Twitter2015 and Twitter2017 can be downloaded here (twitter-2015 and twitter-2017).

3. Quick start

To train the I2SRM model, please use the following bash command:

bash run.sh

4. Result testing

For testing, please delete the "--save_path" line in run.sh and load your checkpoint via "--load_path" instead.
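As a sketch of the swap (the actual entry-point script and other arguments in run.sh may differ; `train.py` and the checkpoint path below are hypothetical, while `--save_path`/`--load_path` are the flags named above):

```shell
# run.sh -- training: save checkpoints during training.
python train.py --save_path ./checkpoints/i2srm

# run.sh -- testing: drop --save_path and load a saved checkpoint instead.
python train.py --load_path ./checkpoints/i2srm
```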

5. Citation

If you find our paper inspiring, please cite:

@article{huang2023i2srm,
  title={I2SRM: Intra- and Inter-Sample Relationship Modeling for Multimodal Information Extraction},
  author={Huang, Yusheng and Lin, Zhouhan},
  journal={arXiv preprint arXiv:2310.06326},
  year={2023}
}
