Hongsheng Li 1,2, Yu Qiao 2, Wanli Ouyang 2, Xiangyu Yue 1,✉
2 OpenGVLab, Shanghai AI Laboratory
* Equal Contribution ✉ Corresponding Author
🚩🚩🚩 Shared-Encoder, Unpaired Data, More Modalities
This repository is built to explore the potential and extensibility of Transformers for multimodal learning. We exploit the ability of Transformers to handle length-variant sequences, propose a Data-to-Sequence tokenization that follows a meta-scheme, and apply it to 12 modalities: text, image, point cloud, audio, video, infrared, hyper-spectral, X-ray, tabular, graph, time-series, and Inertial Measurement Unit (IMU) data.
After obtaining the token sequence, we employ a modality-shared encoder to extract representations across the different modalities. With task-specific heads, Meta-Transformer can then handle a variety of tasks on these modalities, such as classification, detection, and segmentation.
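For intuition, here is a minimal sketch of this three-stage pipeline for the image modality. The tokenizer and head below are illustrative placeholders (not the repository's actual API), and `nn.Identity()` stands in for the pretrained modality-shared encoder.

```python
import torch
import torch.nn as nn

class ImageTokenizer(nn.Module):
    """Hypothetical Data-to-Sequence tokenizer for images (patch embedding)."""
    def __init__(self, patch=16, in_chans=3, dim=768):
        super().__init__()
        self.proj = nn.Conv2d(in_chans, dim, kernel_size=patch, stride=patch)

    def forward(self, x):                               # (B, 3, H, W)
        return self.proj(x).flatten(2).transpose(1, 2)  # (B, N, dim)

class ClassificationHead(nn.Module):
    """Hypothetical task-specific head: mean-pool tokens, then classify."""
    def __init__(self, dim=768, num_classes=1000):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.fc = nn.Linear(dim, num_classes)

    def forward(self, tokens):                          # (B, N, dim)
        return self.fc(self.norm(tokens.mean(dim=1)))

tokenizer = ImageTokenizer()
shared_encoder = nn.Identity()   # placeholder for the modality-shared encoder
head = ClassificationHead()

images = torch.randn(2, 3, 224, 224)
logits = head(shared_encoder(tokenizer(images)))        # (2, 1000)
```

In this design, only the middle encoder is shared across modalities; each modality brings its own lightweight tokenizer and each task its own head.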
🌟 News
- 2023.7.23: 🎉🎉🎉 We have released the code and pretrained weights for image understanding and time-series forecasting.
- 2023.7.22: 🌟🌟🌟 Pretrained weights and a usage demo for our Meta-Transformer have been released. Comprehensive documentation and implementation of the image modality are underway and will be released soon. Stay tuned for more exciting updates!⌛⌛⌛
- 2023.7.21: The paper is released on arXiv, and the code will be released gradually.
- 2023.7.8: GitHub repository initialization.
🔓 Model Zoo
Open-source Modality-Agnostic Models
Demo of Using the Pretrained Encoder
import torch
import torch.nn as nn
from timm.models.vision_transformer import Block

# Load the pretrained modality-shared encoder weights
ckpt = torch.load("Meta-Transformer_base_patch16_encoder.pth")

# Base-scale encoder: 12 Transformer blocks with dim 768 and 12 attention heads
encoder = nn.Sequential(*[
            Block(
                dim=768,
                num_heads=12,
                mlp_ratio=4.,
                qkv_bias=True,
                norm_layer=nn.LayerNorm,
                act_layer=nn.GELU
            )
            for i in range(12)])
encoder.load_state_dict(ckpt, strict=True)
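Once loaded, the encoder expects token embeddings of shape (batch, sequence length, 768) produced by a modality-specific tokenizer. The snippet below is a minimal sanity check with random tokens standing in for real tokenized data; the sequence length of 196 is arbitrary and chosen only for illustration.

```python
# Sanity check: feed a dummy token sequence through the loaded encoder.
# 196 random tokens of dimension 768 stand in for the output of a
# modality-specific tokenizer; the encoder preserves the sequence shape.
tokens = torch.randn(1, 196, 768)
with torch.no_grad():
    features = encoder(tokens)
print(features.shape)  # torch.Size([1, 196, 768])
```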
🕙 ToDo
- Meta-Transformer with Large Language Models.
- Multimodal Joint Training with Meta-Transformer.
- Support More Modalities and More Tasks.
Contact
We welcome contributions to our project!
To contact us, do not hesitate to send an email to yiyuanzhang.ai@gmail.com, kaixionggong@gmail.com, zhangkaipeng@pjlab.org.cn, or xyyue@ie.cuhk.edu.hk!
Citation
If the code and paper help your research, please kindly cite:
@article{zhang2023metatransformer,
title={Meta-Transformer: A Unified Framework for Multimodal Learning},
author={Zhang, Yiyuan and Gong, Kaixiong and Zhang, Kaipeng and Li, Hongsheng and Qiao, Yu and Ouyang, Wanli and Yue, Xiangyu},
year={2023},
journal={arXiv preprint arXiv:2307.10802},
}
License
This project is released under the Apache 2.0 license.
Acknowledgement
This code is developed based on excellent open-source projects, including MMClassification, MMDetection, MMSegmentation, OpenPoints, Time-Series-Library, Graphormer, SpectralFormer, and ViT-Adapter.