Zhijie Zheng's starred repositories
GeoDiffusion
Official PyTorch implementation of GeoDiffusion in ICLR 2024 (https://arxiv.org/abs/2306.04607)
active-learning-detect
Active learning + object detection
GroundingDINO
Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
AgML
AgML is a centralized framework for agricultural machine learning. AgML provides access to public agricultural datasets for common agricultural deep learning tasks, with standard benchmarks and pretrained models, as well as the ability to generate synthetic data and annotations.
CVinW_Readings
A collection of papers on the topic of "Computer Vision in the Wild (CVinW)"
gigagan-pytorch
Implementation of GigaGAN, a new SOTA GAN from Adobe; the culmination of nearly a decade of research into GANs
stanford_alpaca
Code and documentation to train Stanford's Alpaca models and generate the data.
gpt_academic
Provides a practical interactive interface for GPT/GLM and other large language models (LLMs), with particular optimizations for reading, polishing, and writing papers. Modular design with support for custom shortcut buttons and function plugins; project analysis and self-translation for Python, C++, and other codebases; PDF/LaTeX paper translation and summarization; parallel querying of multiple LLMs; and local models such as ChatGLM3. Integrates Qwen (通义千问), DeepSeek-Coder, iFlytek Spark (讯飞星火), ERNIE Bot (文心一言), LLaMA 2, RWKV, Claude 2, MOSS, and more.
segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
each-country-as-a-pokemon-stable-diffusion
Stable Diffusion, fine-tuned on Pokémon [1], is used to generate a Pokémon for each country. [1] https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning
SegLossOdyssey
A collection of loss functions for medical image segmentation
stable-diffusion-webui
Stable Diffusion web UI
denoising-diffusion-pytorch
Implementation of the Denoising Diffusion Probabilistic Model in PyTorch
x-transformers
A simple but complete full-attention transformer with a set of promising experimental features from various papers
mmpretrain
OpenMMLab Pre-training Toolbox and Benchmark
vit-pytorch
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch