awesome foundation and multimodal models

πŸ‘οΈ + πŸ’¬ + 🎧 = πŸ€–

foundation model - a pre-trained machine learning model that serves as a base for a wide range of downstream tasks. It captures general knowledge from a large dataset and can be fine-tuned to perform specific tasks more effectively.

multimodal model - a model that can process multiple modalities (e.g. text, image, video, audio) at the same time.

🗞️ papers

CogVLM: Visual Expert for Pretrained Language Models

arXiv GitHub Gradio

Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang, Lei Zhao, Xixuan Song, Jiazheng Xu, Bin Xu, Juanzi Li, Yuxiao Dong, Ming Ding, Jie Tang

  • Date: 06-11-2023
  • Modalities: 👁️ + 💬
  • Tasks: Image Captioning, VQA

Fuyu-8B: A Multimodal Architecture for AI Agents

Gradio

Rohan Bavishi, Erich Elsen, Curtis Hawthorne, Maxwell Nye, Augustus Odena, Arushi Somani, Sağnak Taşırlar

  • Date: 17-10-2023
  • Modalities: 👁️ + 💬
  • Tasks: Image Classification, Image Captioning, VQA, Find Text in Image
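
Fuyu-8B is on the Hub as `adept/fuyu-8b` and runs through the standard transformers classes. A minimal captioning sketch (assumes a CUDA device; the image path is a placeholder):

```python
import torch
from PIL import Image
from transformers import FuyuForCausalLM, FuyuProcessor

model_id = "adept/fuyu-8b"
processor = FuyuProcessor.from_pretrained(model_id)
model = FuyuForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="cuda:0")

# Fuyu feeds image patches straight into the decoder, with no separate vision encoder.
image = Image.open("bus.png")
prompt = "Generate a coco-style caption.\n"
inputs = processor(text=prompt, images=image, return_tensors="pt").to("cuda:0")

out = model.generate(**inputs, max_new_tokens=30)
# Decode only the newly generated tokens, skipping the prompt.
print(processor.batch_decode(out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```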

Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond

arXiv GitHub

Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou

  • Date: 24-08-2023
  • Modalities: 👁️ + 💬
  • Tasks: Image Captioning, VQA
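
Qwen-VL-Chat ships its own chat helpers via `trust_remote_code`, so `from_list_format` and `model.chat` below are defined by the checkpoint's remote code rather than core transformers. A minimal sketch (the image path is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-VL-Chat", device_map="auto", trust_remote_code=True
).eval()

# Interleave an image reference and a question in a single query.
query = tokenizer.from_list_format([
    {"image": "demo.jpeg"},
    {"text": "What is in this picture?"},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
```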

AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining

arXiv GitHub Gradio

Haohe Liu, Qiao Tian, Yi Yuan, Xubo Liu, Xinhao Mei, Qiuqiang Kong, Yuping Wang, Wenwu Wang, Yuxuan Wang, Mark D. Plumbley

  • Date: 10-08-2023
  • Modalities: 💬 + 🎧
  • Tasks: Text-to-Audio, Text-to-Speech
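
AudioLDM 2 is integrated into diffusers as `AudioLDM2Pipeline`. A minimal text-to-audio sketch, assuming the `cvssp/audioldm2` checkpoint and a CUDA device (output is 16 kHz audio; prompt and output path are illustrative):

```python
import scipy.io.wavfile
import torch
from diffusers import AudioLDM2Pipeline

pipe = AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2", torch_dtype=torch.float16).to("cuda")

prompt = "Birds singing in a forest at dawn"
# More inference steps trade latency for audio quality.
audio = pipe(prompt, num_inference_steps=200, audio_length_in_s=10.0).audios[0]

scipy.io.wavfile.write("output.wav", rate=16000, data=audio)
```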

OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models

arXiv GitHub Gradio

Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt

  • Date: 02-08-2023
  • Modalities: 👁️ + 💬
  • Tasks: Image Classification, Image Captioning, VQA
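
The `open_flamingo` package assembles the model from a CLIP vision encoder and a frozen language model, with weights fetched separately from the Hub. A minimal captioning sketch, assuming the 3B MPT-1B variant (the image path is a placeholder):

```python
import torch
from huggingface_hub import hf_hub_download
from PIL import Image
from open_flamingo import create_model_and_transforms

model, image_processor, tokenizer = create_model_and_transforms(
    clip_vision_encoder_path="ViT-L-14",
    clip_vision_encoder_pretrained="openai",
    lang_encoder_path="anas-awadalla/mpt-1b-redpajama-200b",
    tokenizer_path="anas-awadalla/mpt-1b-redpajama-200b",
    cross_attn_every_n_layers=1,
)
ckpt = hf_hub_download("openflamingo/OpenFlamingo-3B-vitl-mpt1b", "checkpoint.pt")
model.load_state_dict(torch.load(ckpt), strict=False)

# vision_x is (batch, num_images, frames, channels, height, width).
image = Image.open("example.jpg")
vision_x = image_processor(image).unsqueeze(0).unsqueeze(0).unsqueeze(0)
tokenizer.padding_side = "left"
lang_x = tokenizer(["<image>An image of"], return_tensors="pt")

generated = model.generate(
    vision_x=vision_x,
    lang_x=lang_x["input_ids"],
    attention_mask=lang_x["attention_mask"],
    max_new_tokens=20,
    num_beams=3,
)
print(tokenizer.decode(generated[0]))
```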

Kosmos-2: Grounding Multimodal Large Language Models to the World

arXiv GitHub Gradio

Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei

  • Date: 26-06-2023
  • Modalities: 👁️ + 💬
  • Tasks: Image Captioning, VQA, Phrase Grounding
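
Kosmos-2 is in transformers as `microsoft/kosmos-2-patch14-224`; prefixing the prompt with `<grounding>` makes the model emit bounding boxes alongside text. A minimal sketch (the image path is a placeholder):

```python
from PIL import Image
from transformers import AutoProcessor, Kosmos2ForConditionalGeneration

model_id = "microsoft/kosmos-2-patch14-224"
processor = AutoProcessor.from_pretrained(model_id)
model = Kosmos2ForConditionalGeneration.from_pretrained(model_id)

image = Image.open("example.jpg")
prompt = "<grounding>An image of"
inputs = processor(text=prompt, images=image, return_tensors="pt")

generated_ids = model.generate(**inputs, max_new_tokens=64)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

# Split the raw output into a clean caption plus grounded (phrase, span, boxes) tuples.
caption, entities = processor.post_process_generation(generated_text)
print(caption)
print(entities)
```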

LLaVA: Large Language and Vision Assistant

arXiv GitHub Gradio

Haotian Liu, Chunyuan Li, Qingyang Wu, Yong Jae Lee

  • Date: 17-04-2023
  • Modalities: 👁️ + 💬
  • Tasks: Image Captioning, VQA
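
A minimal VQA sketch via transformers, assuming the community `llava-hf/llava-1.5-7b-hf` checkpoint (the image path is a placeholder):

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# LLaVA 1.5 checkpoints expect this exact conversation template.
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"
image = Image.open("example.jpg")
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device, torch.float16)

output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```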

ImageBind: One Embedding Space To Bind Them All

arXiv GitHub

Rohit Girdhar, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, Ishan Misra

  • Date: 09-05-2023
  • Modalities: 👁️ + 💬 + 🎧
  • Tasks: Zero-Shot Classification, Cross-Modal Retrieval
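
ImageBind maps every modality into a single embedding space, so cross-modal similarity reduces to a dot product. A minimal sketch with the official `imagebind` package (file paths are placeholders):

```python
import torch
from imagebind import data
from imagebind.models import imagebind_model
from imagebind.models.imagebind_model import ModalityType

device = "cuda" if torch.cuda.is_available() else "cpu"
model = imagebind_model.imagebind_huge(pretrained=True).eval().to(device)

inputs = {
    ModalityType.TEXT: data.load_and_transform_text(["a dog barking"], device),
    ModalityType.VISION: data.load_and_transform_vision_data(["dog.jpg"], device),
    ModalityType.AUDIO: data.load_and_transform_audio_data(["bark.wav"], device),
}
with torch.no_grad():
    embeddings = model(inputs)

# Text-audio similarity in the shared space; higher means a better match.
print(torch.softmax(embeddings[ModalityType.TEXT] @ embeddings[ModalityType.AUDIO].T, dim=-1))
```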

Segment Anything

arXiv GitHub Colab

Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, Ross Girshick

  • Date: 05-04-2023
  • Modalities: 👁️
  • Tasks: Zero-Shot Segmentation
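
The official `segment_anything` package wraps SAM in a promptable predictor; the ViT-H weights come from the repo's model zoo. A minimal point-prompt sketch (checkpoint and image paths are placeholders):

```python
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One foreground click; multimask_output returns three candidate masks with scores.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
```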

Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection

arXiv GitHub Gradio Colab

Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang

  • Date: 09-03-2023
  • Modalities: 👁️ + 💬
  • Tasks: Zero-Shot Object Detection
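
Grounding DINO detects whatever a free-text caption describes. A minimal sketch using the official `groundingdino` package (config, weights, and image paths are placeholders):

```python
from groundingdino.util.inference import annotate, load_image, load_model, predict

model = load_model(
    "groundingdino/config/GroundingDINO_SwinT_OGC.py",
    "weights/groundingdino_swint_ogc.pth",
)
image_source, image = load_image("example.jpg")

# Phrases are separated by " . "; the thresholds filter box and phrase confidence.
boxes, logits, phrases = predict(
    model=model,
    image=image,
    caption="chair . person . dog",
    box_threshold=0.35,
    text_threshold=0.25,
)
annotated = annotate(image_source=image_source, boxes=boxes, logits=logits, phrases=phrases)
```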

BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models

arXiv GitHub Gradio Colab

Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi

  • Date: 30-01-2023
  • Modalities: 👁️ + 💬
  • Tasks: Image Captioning, VQA
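
BLIP-2 checkpoints live on the Hub under `Salesforce/blip2-*`; one model covers captioning (image only) and VQA (image plus a question prompt). A minimal captioning sketch via transformers (the image path is a placeholder):

```python
import torch
from PIL import Image
from transformers import Blip2ForConditionalGeneration, Blip2Processor

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("example.jpg")
# Captioning: image only. For VQA pass text="Question: how many dogs are there? Answer:".
inputs = processor(images=image, return_tensors="pt").to(model.device, torch.float16)

out = model.generate(**inputs, max_new_tokens=30)
print(processor.batch_decode(out, skip_special_tokens=True)[0].strip())
```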

OWL-ST: Scaling Open-Vocabulary Object Detection

arXiv Gradio

Matthias Minderer, Alexey Gritsenko, Neil Houlsby

  • Date: 16-06-2023
  • Modalities: 👁️ + 💬
  • Tasks: Zero-Shot Object Detection
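
The self-trained detectors from this paper were released as OWLv2, which transformers exposes with the same interface as OWL-ViT. A minimal sketch, assuming the `google/owlv2-base-patch16-ensemble` checkpoint (post-process as in the OWL-ViT example further down):

```python
from PIL import Image
from transformers import Owlv2ForObjectDetection, Owlv2Processor

processor = Owlv2Processor.from_pretrained("google/owlv2-base-patch16-ensemble")
model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-base-patch16-ensemble")

image = Image.open("example.jpg")
inputs = processor(text=[["a photo of a cat", "a photo of a dog"]], images=image, return_tensors="pt")
outputs = model(**inputs)  # same post-processing as the OWL-ViT example below
```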

Whisper: Robust Speech Recognition via Large-Scale Weak Supervision

arXiv GitHub Colab

Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever

  • Date: 06-12-2022
  • Modalities: 💬 + 🎧
  • Tasks: Speech Recognition, Speech Translation
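
The official `openai-whisper` package handles loading, resampling, and decoding in a single call. A minimal transcription sketch (model size and audio path are placeholders):

```python
import whisper  # pip install -U openai-whisper

model = whisper.load_model("base")  # tiny / base / small / medium / large
result = model.transcribe("audio.mp3")

print(result["text"])      # full transcript
print(result["language"])  # auto-detected language code
```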

OWL-ViT: Simple Open-Vocabulary Object Detection with Vision Transformers

arXiv GitHub Gradio

Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, Neil Houlsby

  • Date: 12-05-2022
  • Modalities: 👁️ + 💬
  • Tasks: Zero-Shot Object Detection
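
OWL-ViT localizes objects named by free-text queries without detection fine-tuning on those classes. A minimal sketch via transformers (image path and queries are placeholders):

```python
import torch
from PIL import Image
from transformers import OwlViTForObjectDetection, OwlViTProcessor

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

image = Image.open("example.jpg")
texts = [["a photo of a cat", "a photo of a dog"]]
inputs = processor(text=texts, images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# target_sizes is (height, width); boxes come back in pixel coordinates.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(texts[0][label], round(score.item(), 3), box.tolist())
```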

CLIP: Learning Transferable Visual Models From Natural Language Supervision

arXiv GitHub Colab

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever

  • Date: 26-02-2021
  • Modalities: 👁️ + 💬
  • Tasks: Zero-Shot Classification, Image-Text Retrieval
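
With CLIP, zero-shot classification is a similarity ranking between one image embedding and several caption embeddings. A minimal sketch via transformers (image path and labels are placeholders):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")
labels = ["a photo of a cat", "a photo of a dog"]
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarities; softmax turns them into probabilities.
probs = outputs.logits_per_image.softmax(dim=1)
print(dict(zip(labels, probs[0].tolist())))
```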

🦸 contribution

We would love your help in making this repository even better! If you know of an amazing paper that isn't listed here, or if you have any suggestions for improvement, feel free to open an issue or submit a pull request.
