Jiancheng Liu's starred repositories
bigcode-evaluation-harness
A framework for the evaluation of autoregressive code generation language models.
stable-diffusion-webui
Stable Diffusion web UI
llvm-project
The LLVM Project is a collection of modular and reusable compiler and toolchain technologies.
Unlearn-WorstCase
"Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning" by Chongyu Fan*, Jiancheng Liu*, Alfred Hero, Sijia Liu
UnlearnCanvas
UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models by Yihua Zhang, Yimeng Zhang, Yuguang Yao, Jinghan Jia, Jiancheng Liu, Xiaoming Liu, Sijia Liu
Unlearn-Saliency
[ICLR24 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation" by Chongyu Fan*, Jiancheng Liu*, Yihua Zhang, Eric Wong, Dennis Wei, Sijia Liu
awesome-trustworthy-deep-learning
A curated list of trustworthy deep learning papers. Updated daily.
awesome-machine-unlearning
Awesome Machine Unlearning (A Survey of Machine Unlearning)
doppelgangers
Doppelgangers: Learning to Disambiguate Images of Similar Structures
Unlearn-Sparse
[NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gaowen Liu, Yang Liu, Pranay Sharma, Sijia Liu
natural-adv-examples
A Harder ImageNet Test Set (CVPR 2021)
ImageNet-Sketch
ImageNet-Sketch dataset for evaluating a model's ability to learn (out-of-domain) semantics at ImageNet scale
texture-vs-shape
Pre-trained models, data, code & materials from the paper "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness" (ICLR 2019 Oral)
robustness
Corruption and Perturbation Robustness (ICLR 2019)
ood-benchmarks
Out-of-distribution generalization benchmarks for image recognition models
visual_prompting
Official implementation and data release of the paper "Visual Prompting via Image Inpainting".
open_flamingo
An open-source framework for training large multimodal models.