UX-Decoder/LLaVA-Grounding
Stargazers: 285 · Watchers: 15 · Issues: 23 · Forks: 10
UX-Decoder/LLaVA-Grounding Issues
- Link to config files are down · Updated 3 months ago · 1 comment
- Demo is down · Updated 3 months ago · 1 comment
- Evaluation on refcoco datasets. · Updated 3 months ago · 1 comment
- flick dataset has not the attribute segmentation error · Updated 4 months ago
- Can not find files like 'coco/annotations/instances_train2017_refcoco.json' and 'coco/annotations/grounding_train2017_instruct.json'. · Updated 4 months ago · 1 comment
- Missing dataset for pretraining · Closed 5 months ago · 4 comments
- pretrain_joint miss the pretrain_mm_mlp_adapter · Updated 4 months ago
- Visual prompt `pretrain weight` does not match · Updated 4 months ago
- How to use `gd_ls` ? · Closed 5 months ago · 1 comment
- Could you please release the checkpoints for stages 1 and 2? · Updated 4 months ago
- The trouble of visual prompt · Closed 5 months ago · 1 comment
- Where is the inference code? · Closed 5 months ago · 2 comments
- When will the code and dataset be released? · Closed 5 months ago · 1 comment
- online demo returns error · Closed 5 months ago · 1 comment
- Where mask2former/modeling/pixel_decoder/ops ? · Closed 5 months ago · 1 comment
- Visual Grounding Prompt · Closed 6 months ago · 3 comments
- Release of training data and training code · Closed 6 months ago · 1 comment
- where can I find the details about "the implementation and LLM prompts used across different pipeline levels" mentioned in section 4? · Closed 6 months ago
- Online demo failed · Closed 6 months ago · 1 comment