ske159's repositories
llama
Inference code for Llama models
pyllama
LLaMA: Open and Efficient Foundation Language Models
stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
AI-For-Beginners
12 Weeks, 24 Lessons, AI for All!
clip-gpt-captioning
CLIPxGPT Captioner is an image-captioning model based on OpenAI's CLIP and GPT-2.
BEiT-V3
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
open_clip
An open source implementation of CLIP.
finetuner
:dart: Task-oriented finetuning for better embeddings on neural search
gpt-2
Code for the paper "Language Models are Unsupervised Multitask Learners"
GLIP
Grounded Language-Image Pre-training
---
Paragraph-by-paragraph close readings of classic and recent deep-learning papers
BLIP
PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
CapDec
CapDec: SOTA Zero Shot Image Captioning Using CLIP and GPT2, EMNLP 2022 (findings)
CLIP_Cap
COCO image captioning via CLIP
VisualGPT
VisualGPT: Data-efficient adaptation of pretrained language models for image captioning (CVPR 2022)
CoOp-AUTO-Prompt
Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22)
COVID-Net
COVID-Net Open Source Initiative
p3c
Alibaba Java Coding Guidelines: PMD implementations and an IDE plugin
witCLIP-
WIT (Wikipedia-based Image Text) Dataset is a large multimodal multilingual dataset comprising 37M+ image-text sets with 11M+ unique images across 100+ languages.
CLIP_prefix_caption
Simple image captioning model
cog
Containers for machine learning
CPM_Chinese_Gen
Easy-to-use CPM for Chinese text generation
CLIP-cross_model_contrastive_Learning
Contrastive Language-Image Pretraining
dino_self_supervised_learning
PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO
Image_caption_CNN_TransDecoder
ImageCaptionV3
huggingface_hub
All the open source things related to the Hugging Face Hub.
CoreNLP
Stanford CoreNLP: A Java suite of core NLP tools.
WildLifeDataset
iWildCam competition details