AILab-CVC / YOLO-World

[CVPR 2024] Real-Time Open-Vocabulary Object Detection

Home Page: https://www.yoloworld.cc

Roadmap of YOLO-World

wondervictor opened this issue

This issue will be kept open and pinned for a long time, as we hope to hear everyone's opinions, suggestions, and needs!
We want to make YOLO-World stronger and encourage more diverse applications, especially practical ones. We maintain an open and welcoming attitude. YOLO-World is under active development and improvement, and we are doing our best on both upstream pre-training and downstream deployment tools. At present our manpower is limited, so we hope you can give us some time and contribute your experience or help when you can!

If you have a good idea or need, just reply to this issue and @ me. I will respond promptly when I see it, and consider adding it to the TODO list.

TODO List (Community Version)

🎯: High priority or ongoing.

  • Optimize torch.einsum (👍 thanks to @taofuyu, #118)
  • Support more language models: CLIP-Large (high priority), BEIT-3 (@mio410), and T5-Encoder.
  • Support image prompts (#102)
  • #141
  • Results on ODinW (#98).
  • 🎯 Fix ONNX bugs, add an ONNX demo, and write detailed ONNX documentation (#27 #33 #77 #50).
  • TensorRT export, TensorRT demo, and TensorRT documentation (#29).
  • 🎯 Fine-tune more 1280-resolution pre-trained models (#142).
  • 🎯 Fix poor fine-tuning results on COCO without mask-refine (#160 #72 #76).
  • 🎯 Evaluate open-vocabulary/zero-shot capability after fine-tuning or prompt-tuning (#78 #154).
  • 🎯 Demo with image prompts (#208).
  • Optimize training pipelines to improve resource utilization (#165).
  • Batch/distributed inference (#246 #253)
  • Video inference (#182 #263)
  • ONNX with text inputs & text embeddings (#285); see the sketch after this list.
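
On the last item (decoupling the text branch so the exported ONNX graph consumes precomputed text embeddings), here is a minimal sketch of caching class embeddings offline with the HuggingFace CLIP text tower. The model name matches the default text encoder in the released configs, but the exact embedding format expected by the exported model is an assumption:

import torch
from transformers import AutoTokenizer, CLIPTextModelWithProjection

# Default text encoder used by the YOLO-World configs.
name = 'openai/clip-vit-base-patch32'
tokenizer = AutoTokenizer.from_pretrained(name)
model = CLIPTextModelWithProjection.from_pretrained(name).eval()

texts = ['person', 'dog', 'hard hat']  # illustrative custom vocabulary
inputs = tokenizer(texts, padding=True, return_tensors='pt')
with torch.no_grad():
    embeds = model(**inputs).text_embeds             # (K, D) class embeddings
embeds = embeds / embeds.norm(dim=-1, keepdim=True)  # L2-normalize, as CLIP does
torch.save(embeds, 'text_embeddings.pt')             # feed to the ONNX model as an input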

torch.einsum() should be replaced with torch.matmul() and torch.sum(), because einsum() is not supported by most edge devices.
For example, I rewrote the code:
x = torch.einsum('bchw,bkc->bkhw', x, w)
to
batch, channel, height, width = x.shape  # x: (B, C, H, W)
_, k, _ = w.shape                        # w: (B, K, C)
x = x.permute(0, 2, 3, 1)                # (B, C, H, W) -> (B, H, W, C)
x = x.reshape(batch, -1, channel)        # (B, H, W, C) -> (B, H*W, C)
w = w.permute(0, 2, 1)                   # (B, K, C) -> (B, C, K)
x = torch.matmul(x, w)                   # (B, H*W, C) @ (B, C, K) -> (B, H*W, K)
x = x.reshape(batch, height, width, k)   # (B, H*W, K) -> (B, H, W, K)
x = x.permute(0, 3, 1, 2)                # (B, H, W, K) -> (B, K, H, W)
It may be ugly, but it can be deployed.
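
A minimal self-contained sketch to sanity-check that the rewrite matches einsum numerically (the shapes and the helper name einsum_free are illustrative, not from the YOLO-World code):

import torch

def einsum_free(x, w):
    # Equivalent of torch.einsum('bchw,bkc->bkhw', x, w) using matmul only.
    batch, channel, height, width = x.shape
    _, k, _ = w.shape
    x = x.permute(0, 2, 3, 1).reshape(batch, -1, channel)  # (B, H*W, C)
    out = torch.matmul(x, w.permute(0, 2, 1))              # (B, H*W, K)
    return out.reshape(batch, height, width, k).permute(0, 3, 1, 2)

x = torch.randn(2, 64, 20, 20)  # dummy feature map
w = torch.randn(2, 80, 64)      # dummy text embeddings, e.g., 80 class prompts
ref = torch.einsum('bchw,bkc->bkhw', x, w)
assert torch.allclose(einsum_free(x, w), ref, atol=1e-5)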
@wondervictor

@taofuyu Good idea, got it!

@wondervictor May I ask where I should modify the code if I want to try other text encoders, such as replacing CLIP's text encoder with BEIT-3? Thank you!

@mio410 Good idea. We do plan to adopt better and stronger text encoders (e.g., CLIP-Large) and are currently queuing for computation resources to pre-train them. BEIT-3 is a good choice and we are considering it. BTW, which model size do you need most right now? I can prioritize it.

Besides, I'd like to try a CLIP model trained in a different language, to see whether I can use prompts in that language for open-vocabulary detection. Is this possible?

I'm looking forward to your work! If possible, I'd like to try open-vocabulary detection in other languages. Could you help me with that?
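
As a pointer for experiments like this: in the released configs the text encoder is the text_model entry of the backbone dict (the same dict quoted below for swapping the image backbone), so trying a different CLIP checkpoint is mostly a one-line config change. A minimal sketch; the checkpoint name is purely illustrative, and a checkpoint whose text tower is not a standard CLIP architecture may need changes inside HuggingCLIPLanguageBackbone:

text_model_name = 'OFA-Sys/chinese-clip-vit-base-patch16'  # illustrative, not a recommendation

backbone=dict(
    _delete_=True,
    type='MultiModalYOLOBackbone',
    image_model={{_base_.model.backbone}},  # keep the default image backbone
    text_model=dict(
        type='HuggingCLIPLanguageBackbone',
        model_name=text_model_name,  # only this line changes
        frozen_modules=['all'])),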

YOLO-World relies on CLIP's word embeddings for reparameterization. If we replaced CLIP with a larger model such as GPT-4, would it understand more, similar to Sora's powerful ability to understand images?

Hi @dikapiliao1, it's a nice idea and we plan to do it.

Where can I change the config if I want to use a different visual backbone?

@xianhonghuang Replace the image_model config according to your needs:

backbone=dict(
    _delete_=True,
    type='MultiModalYOLOBackbone',
    image_model={{_base_.model.backbone}},
    text_model=dict(
        type='HuggingCLIPLanguageBackbone',
        model_name=text_model_name,
        frozen_modules=['all'])),

Do you mean changing this part:
_base_ = ('../../third_party/mmyolo/configs/yolov8/'
'yolov8_l_syncbn_fast_8xb16-500e_coco.py')
I'd like to switch to the YOLOv7 backbone first.

Hi @xianhonghuang, you can directly override the backbone dict in the config, e.g., change the image_model to YOLOv7Backbone; a sketch follows below. BTW, it would be better to open a new issue to discuss this question, since this issue is meant for new features and suggestions.
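
For reference, such an override might look like the sketch below, assuming mmyolo's YOLOv7Backbone. The arch/norm/act arguments are illustrative (check mmyolo's yolov7 configs for the real values), and the neck/head channel settings must also be adapted to YOLOv7's output channels:

backbone=dict(
    _delete_=True,
    type='MultiModalYOLOBackbone',
    image_model=dict(
        type='YOLOv7Backbone',
        arch='L',  # illustrative; see mmyolo's yolov7 configs for valid values
        norm_cfg=dict(type='BN', momentum=0.03, eps=0.001),
        act_cfg=dict(type='SiLU', inplace=True)),
    text_model=dict(
        type='HuggingCLIPLanguageBackbone',
        model_name=text_model_name,
        frozen_modules=['all'])),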

The config yolo_world_v2_xl_vlpan_bn_2e-3_100e_4x8gpus_obj365v1_goldg_train_lvis_minival.py does not match its model weights.

@RudyCheng, it has been resolved.

[Object detection on document images] Are there any specialized optimization strategies or support for object detection in vertical domains, specifically document images such as invoices and passports?