
MT-DETR

Official resources for "Towards Few-Annotation Learning for Object Detection: Are Transformer-based Models More Efficient?", IEEE/CVF WACV, 2023.

Citation

If you find this repository useful for your own work, please cite our paper:

Q. Bouniot, A. Loesch, R. Audigier, A. Habrard, "Towards Few-Annotation Learning for Object Detection: Are Transformer-based Models More Efficient?", in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Jan. 2023.

@InProceedings{bouniot2023wacv,
  TITLE = {{Towards Few-Annotation Learning for Object Detection: Are Transformer-based Models More Efficient?}},
  AUTHOR = {Bouniot, Quentin and Loesch, Angelique and Audigier, Romaric and Habrard, Amaury},
  BOOKTITLE = {{IEEE/CVF WACV}},
  YEAR = {2023},
  MONTH = Jan,
}
