FreddeFrallan/Multilingual-CLIP
OpenAI CLIP text encoders for multiple languages!
Stargazers: 739 | Watchers: 19 | Issues: 27 | Forks: 69
FreddeFrallan/Multilingual-CLIP Issues
Have you tried new LLMs such as Vicuna-13B? (closed 8 months ago, 2 comments)
Confusion about the motivation (updated 9 months ago, 6 comments)
Cannot load postTransformation weights (updated 9 months ago)
Evaluation code for the models (Txt2Img Recall@10) (updated 10 months ago)
Have you tested these models on ImageNet? (updated a year ago)
Compatibility with torch.compile (updated a year ago)
Support for clip-vit-large-patch14-336 (updated a year ago, 1 comment)
Mismatched number of supported languages (updated a year ago)
How to convert the model to ONNX (updated a year ago)
License (closed a year ago, 1 comment)
model_type 'M-CLIP' is not in CONFIG_MAPPING (updated a year ago, 2 comments)
Data leak (updated a year ago, 2 comments)
1024-dim embedding model needed (closed 2 years ago, 4 comments)
How to convert the TF model to a PyTorch model (updated 2 years ago)
BibTeX citation (closed 2 years ago, 1 comment)
XLM-RoBERTa feature request (closed 2 years ago, 1 comment)
Fix PyPI release (closed 2 years ago, 2 comments)
Replace all occurrences of mclip with multilingual-clip (closed 2 years ago, 1 comment)
Training a model for ViT-L/14 image embeddings (closed 2 years ago, 1 comment)
Code for translating and generating the CLIP embeddings (updated 2 years ago, 2 comments)
Some confusion about "Pre-trained CLIP text encoders for multiple languages" (updated 2 years ago, 1 comment)
Some questions about fine-tuning (updated 2 years ago, 1 comment)
Issue with the M-Bert-Base-ViT-B CLIP-head linear layer size (updated 3 years ago, 2 comments)
About running inference (updated 3 years ago)
Support ViT-B/32 as the vision model (closed 3 years ago, 5 comments)
Training (fine-tuning) code (closed 3 years ago, 3 comments)