Jiuniu Wang's repositories
adversarial_training
PyTorch implementation of the methods proposed in **Adversarial Training Methods for Semi-Supervised Text Classification**, evaluated on the IMDB dataset
ARLDM
Official PyTorch implementation of "Synthesizing Coherent Story with Auto-Regressive Latent Diffusion Models"
ASTRAL
ASTRAL: an adversarially trained LSTM-CNN for named entity recognition
attribute-label-embedding
An implementation of the Attribute Label Embedding (ALE) method for zero-shot learning
Chinese-ChatLLaMA
Chinese LLaMA base model; Chinese ChatLLaMA dialogue model; NLP pre-training and instruction fine-tuning datasets
Chinese-LLaMA-Alpaca
Chinese LLaMA & Alpaca large language models with local CPU/GPU deployment
clip-gpt-captioning
CLIPxGPT Captioner is an image captioning model based on OpenAI's CLIP and GPT-2.
CLIP4Clip
An official implementation of "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval"
DiscCaptioning
Code for "Discriminability Objective for Training Descriptive Captions" (CVPR 2018)
Hackintosh-T450s
A record of installing Hackintosh on the ThinkPad T450s laptop; it may also be useful for other devices.
CLIP_prefix_caption
Simple image captioning model
CogVideo
Text-to-video generation. The repo for the ICLR 2023 paper "CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers"
CVPR2023-DMVFN
CVPR 2023 (highlight): A Dynamic Multi-Scale Voxel Flow Network for Video Prediction
dalle2-laion
Pretrained DALL·E 2 from LAION
diffusion-image-captioning
Implementation of the paper at https://arxiv.org/abs/2210.04559
FastChat
The release repo for "Vicuna: An Open Chatbot Impressing GPT-4"
Gen-L-Video
The official implementation for "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising".
instant-ngp
Instant neural graphics primitives: lightning-fast NeRF and more
llama
Inference code for LLaMA models
llama-train
User-friendly LLaMA: train or run the model using PyTorch. Nothing else.
Markdown
Basic Markdown syntax.
SatMAE
Official code repository for NeurIPS 2022 paper "SatMAE: Pretraining Transformers for Temporal and Multi-Spectral Satellite Imagery"
show-control-and-tell
Show, Control and Tell: A Framework for Generating Controllable and Grounded Captions. CVPR 2019
stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
ViT-pytorch
PyTorch reimplementation of the Vision Transformer ("An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale")
WebGame_KissFish
A simple web game called KissFish, built with the HTML5 canvas.