
Object Detection with Transformers: A Review

This repository provides an up-to-date list of studies on improving the DEtection TRansformer (DETR). It follows the taxonomy of the following paper (please cite it if you find this repository useful):

T. Shehzadi, K. A. Hashmi, D. Stricker, M. Z. Afzal, "Object Detection with Transformers: A Review"

Preprint: [arXiv]

BibTeX entry:

@misc{shehzadi2023object,
      title={Object Detection with Transformers: A Review}, 
      author={Tahira Shehzadi and Khurram Azeem Hashmi and Didier Stricker and Muhammad Zeshan Afzal},
      year={2023},
      eprint={2306.04670},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

How to request addition of a paper

If you know of a paper on transformer-based object detection that is missing from this repository, you are welcome to submit a pull request that adds it, following the entry format used below.

Table of Contents

  1. Statistics overview
  2. DETR and its improvements
    2.1 Backbone Modifications
    2.2 Pre-training Mechanism Modifications
    2.3 Attention Mechanism Modifications
    2.4 Object Queries Modifications
  3. Performance Analysis

1. Statistics overview

Statistics overview of the literature on Transformers. (a) Number of citations per year of Transformer papers. (b) Citations in the last 12 months of Detection Transformer papers. (c) Percentage breakdown of the modifications to the original DEtection TRansformer (DETR) that improve performance and training convergence. (d) Number of peer-reviewed publications per year that used DETR as a baseline. (e) A non-exhaustive timeline of important developments in DETR for detection tasks.

2. DEtection TRansformer (DETR) and its improvements

DETR

[DETR] End-to-End Object Detection with Transformers.
Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
ECCV 2020. [paper] [code] [detrex code]
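
DETR frames detection as direct set prediction: a fixed set of object queries is matched one-to-one against the ground-truth objects with the Hungarian algorithm, and the loss is computed on the matched pairs. The snippet below is a minimal sketch of that matching step, not the authors' implementation: the cost weight is illustrative and the paper's GIoU cost term is omitted for brevity.

```python
# Minimal sketch of DETR-style bipartite matching for one image.
# Assumed simplifications: illustrative cost weight (5.0), no GIoU term.
import torch
from scipy.optimize import linear_sum_assignment

def hungarian_match(pred_logits, pred_boxes, gt_labels, gt_boxes):
    """pred_logits: [Q, C], pred_boxes: [Q, 4] normalized (cx, cy, w, h),
    gt_labels: [G], gt_boxes: [G, 4]. Returns matched (query_idx, gt_idx)."""
    prob = pred_logits.softmax(-1)                       # [Q, C]
    cost_class = -prob[:, gt_labels]                     # [Q, G]: likely class -> low cost
    cost_bbox = torch.cdist(pred_boxes, gt_boxes, p=1)   # [Q, G]: L1 box distance
    cost = cost_class + 5.0 * cost_bbox                  # illustrative weighting
    q_idx, g_idx = linear_sum_assignment(cost.detach().cpu().numpy())
    return torch.as_tensor(q_idx), torch.as_tensor(g_idx)

# Toy usage: 100 queries, 91 COCO-style classes, 3 ground-truth objects.
q_idx, g_idx = hungarian_match(torch.randn(100, 91), torch.rand(100, 4),
                               torch.tensor([1, 7, 42]), torch.rand(3, 4))
```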

Improvements

2.1. Backbone Modifications

WB-DETR: Transformer-Based Detector without Backbone.
Fanfan Liu, Haoran Wei, Wenzhe Zhao, Guozhen Li, Jingquan Peng, Zihao Li.
ICCV 2021. [paper]

FP-DETR: Detection Transformer Advanced by Fully Pre-training.
Wen Wang, Yang Cao, Jing Zhang, Dacheng Tao.
ICLR 2022. [paper]

2.2. Pre-training Mechanism Modifications

UP-DETR: Unsupervised Pre-training for Object Detection with Transformers.
Zhigang Dai, Bolun Cai, Yugeng Lin, Junying Chen.
CVPR 2021. [paper] [code]

[YOLOS] You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection.
Yuxin Fang*, Bencheng Liao*, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
NeurIPS 2021. [paper] [code]

FP-DETR: Detection Transformer Advanced by Fully Pre-training.
Wen Wang, Yang Cao, Jing Zhang, Dacheng Tao.
ICLR 2022. [paper]

Group DETR v2: Strong Object Detector with Encoder-Decoder Pretraining.
Qiang Chen, Jian Wang, Chuchu Han, Shan Zhang, Zexian Li, Xiaokang Chen, Jiahui Chen, Xiaodi Wang, Shuming Han, Gang Zhang, Haocheng Feng, Kun Yao, Junyu Han, Errui Ding, Jingdong Wang.
arXiv 2022. [paper]

2.3. Attention Mechanism Modifications

Deformable DETR: Deformable Transformers for End-to-End Object Detection.
Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai.
ICLR 2021. [paper] [code] [detrex code]
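
Deformable DETR replaces dense global attention with attention over a small set of learned sampling points around each query's reference point, which is the main source of its faster convergence and lower cost. Below is a hedged single-scale, single-head sketch of that idea; the real module is multi-head and multi-scale and normalizes offsets by the feature-map size, and the class and parameter names here are made up for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableAttnSketch(nn.Module):
    """Toy single-scale, single-head deformable attention: each query attends
    to n_points sampled locations around its reference point, not the full map."""
    def __init__(self, dim=256, n_points=4):
        super().__init__()
        self.offsets = nn.Linear(dim, n_points * 2)   # (dx, dy) per sampling point
        self.weights = nn.Linear(dim, n_points)       # attention weight per point
        self.value_proj = nn.Linear(dim, dim)
        self.n_points = n_points

    def forward(self, query, ref_points, feat):
        # query: [B, Q, C]; ref_points: [B, Q, 2] in [0, 1]; feat: [B, C, H, W]
        B, Q, _ = query.shape
        value = self.value_proj(feat.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        # Offsets are in normalized units here (a simplification of the paper).
        offsets = self.offsets(query).view(B, Q, self.n_points, 2)
        # grid_sample expects sampling locations in [-1, 1].
        loc = 2.0 * (ref_points.unsqueeze(2) + offsets) - 1.0      # [B, Q, K, 2]
        sampled = F.grid_sample(value, loc, align_corners=False)   # [B, C, Q, K]
        w = self.weights(query).softmax(-1).unsqueeze(1)           # [B, 1, Q, K]
        return (sampled * w).sum(-1).transpose(1, 2)               # [B, Q, C]

# Toy usage: 2 images, 300 queries, a 32x32 feature map.
attn = DeformableAttnSketch()
out = attn(torch.randn(2, 300, 256), torch.rand(2, 300, 2),
           torch.randn(2, 256, 32, 32))
```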

Fast Convergence of DETR with Spatially Modulated Co-Attention.
Peng Gao, Minghang Zheng, Xiaogang Wang, Jifeng Dai, Hongsheng Li.
ICCV 2021. [paper] [code]

Rethinking Transformer-based Set Prediction for Object Detection.
Zhiqing Sun, Shengcao Cao, Yiming Yang, Kris Kitani.
ICCV 2021. [paper] [code]

PnP-DETR: Towards Efficient Visual Analysis with Transformers.
Tao Wang, Li Yuan, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
ICCV 2021. [paper] [code]

Dynamic DETR: End-to-End Object Detection With Dynamic Attention.
Xiyang Dai, Yinpeng Chen, Jianwei Yang, Pengchuan Zhang, Lu Yuan, Lei Zhang.
ICCV 2021. [paper]

Anchor DETR: Query Design for Transformer-Based Object Detection.
Yingming Wang, Xiangyu Zhang, Tong Yang, Jian Sun.
AAAI 2022. [paper] [code]

Sparse DETR: Efficient End-to-End Object Detection with Learnable Sparsity.
Byungseok Roh, JaeWoong Shin, Wuhyun Shin, Saehoon Kim.
ICLR 2022. [paper] [code]

D^2ETR: Decoder-Only DETR with Computationally Efficient Cross-Scale Attention.
Junyu Lin, Xiaofeng Mao, Yuefeng Chen, Lei Xu, Yuan He, Hui Xue.
arXiv 2022. [paper] [code]

CF-DETR: Coarse-to-Fine Transformers for End-to-End Object Detection.
Xipeng Cao, Peng Yuan, Bailan Feng, Kun Niu.
AAAI 2022. [paper]

AdaMixer: A Fast-Converging Query-Based Object Detector.
Ziteng Gao, Limin Wang, Bing Han, Sheng Guo.
CVPR 2022. [paper] [code]

Recurrent Glimpse-based Decoder for Detection with Transformer.
Zhe Chen, Jing Zhang, Dacheng Tao.
CVPR 2022. [paper] [code]

2.4. Object Queries Modifications

Efficient DETR: Improving End-to-End Object Detector with Dense Prior.
Zhuyu Yao, Jiangbo Ai, Boxun Li, Chi Zhang.
arXiv 2021. [paper]

Conditional DETR for Fast Training Convergence.
Depu Meng*, Xiaokang Chen*, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang.
ICCV 2021. [paper] [code] [detrex code]

Anchor DETR: Query Design for Transformer-Based Object Detection.
Yingming Wang, Xiangyu Zhang, Tong Yang, Jian Sun.
AAAI 2022. [paper] [code]

DAB-DETR: Dynamic Anchor Boxes are Better Queries for DETR.
Shilong Liu, Feng Li, Hao Zhang, Xiao Yang, Xianbiao Qi, Hang Su, Jun Zhu, Lei Zhang.
ICLR 2022. [paper] [code] [detrex code]

DN-DETR: Accelerate DETR Training by Introducing Query DeNoising.
Feng Li*, Hao Zhang*, Shilong Liu, Jian Guo, Lionel M. Ni, Lei Zhang.
CVPR 2022. [paper] [code] [detrex code]
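
DN-DETR speeds up training by feeding the decoder extra "denoising" queries built from noised ground-truth boxes; their targets are known, so they bypass the unstable bipartite matching and directly learn box refinement. Below is a toy sketch of the box-noising step, assuming normalized (cx, cy, w, h) boxes; the paper additionally noises class labels and masks attention so denoising groups cannot see each other or the matching queries.

```python
import torch

def make_denoising_queries(gt_boxes, box_noise=0.4, groups=2):
    """gt_boxes: [G, 4] normalized (cx, cy, w, h) -> noised copies [groups*G, 4].
    Centers are shifted and sizes rescaled by up to +/- box_noise/2 of the box size.
    The noise scale and group count are illustrative, not the paper's settings."""
    noised = gt_boxes.repeat(groups, 1)
    wh = noised[:, 2:].clone()
    noised[:, :2] += (torch.rand_like(wh) - 0.5) * box_noise * wh
    noised[:, 2:] *= 1.0 + (torch.rand_like(wh) - 0.5) * box_noise
    return noised.clamp(0.0, 1.0)

# Toy usage: 3 ground-truth boxes -> 6 denoising queries in 2 groups.
dn_queries = make_denoising_queries(torch.rand(3, 4))
```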

DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection.
Hao Zhang*, Feng Li*, Shilong Liu*, Lei Zhang, Hang Su, Jun Zhu, Lionel M. Ni, Heung-Yeung Shum.
arXiv 2022. [paper] [code] [detrex code]

Mask DINO: Towards A Unified Transformer-based Framework for Object Detection and Segmentation.
Feng Li*, Hao Zhang*, Huaizhe Xu, Shilong Liu, Lei Zhang, Lionel M. Ni, Heung-Yeung Shum.
arXiv 2022. [paper] [code]

DETRs with Hybrid Matching.
Ding Jia, Yuhui Yuan, Haodi He, Xiaopei Wu, Haojun Yu, Weihong Lin, Lei Sun, Chao Zhang, Han Hu.
arXiv 2022. [paper] [code]

Group DETR: Fast DETR Training with Group-Wise One-to-Many Assignment.
Qiang Chen, Xiaokang Chen, Jian Wang, Haocheng Feng, Junyu Han, Errui Ding, Gang Zeng, Jingdong Wang.
arXiv 2022. [paper]

Semantic-Aligned Matching for Enhanced DETR Convergence and Multi-Scale Feature Fusion.
Gongjie Zhang, Zhipeng Luo, Yingchen Yu, Jiaxing Huang, Kaiwen Cui, Shijian Lu, Eric P. Xing.
arXiv 2022. [paper] [code]

DETRs with Collaborative Hybrid Assignments Training.
Zhuofan Zong, Guanglu Song, Yu Liu.
arXiv 2022. [paper] [code]

Dense Distinct Query for End-to-End Object Detection.
Shilong Zhang, Xinjiang Wang, Jiaqi Wang, Jiangmiao Pang, Chengqi Lyu, Wenwei Zhang, Ping Luo, Kai Chen.
CVPR 2023. [paper] [code]

3. Performance Analysis

  • Comparison of all DETR-based detection transformers on the COCO val set. (a) Performance of detection transformers with a ResNet-50 backbone w.r.t. training epochs; networks labelled DC5 use a dilated feature map, while the others use multi-scale features. (b) Performance of detection transformers w.r.t. model size.

  • Comparison of DETR-based detection transformers on the COCO val set for small, medium, and large objects. (a) Performance on small objects. (b) Performance on medium objects. (c) Performance on large objects.


License: Apache License 2.0