VITA's repositories
NeuralLift-360
[CVPR 2023, Highlight] "NeuralLift-360: Lifting An In-the-wild 2D Photo to A 3D Object with 360° Views", Dejia Xu, Yifan Jiang, Peihao Wang, Zhiwen Fan, Yi Wang, Zhangyang Wang
ViT-Anti-Oversmoothing
[ICLR 2022] "Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain Analysis: From Theory to Practice" by Peihao Wang, Wenqing Zheng, Tianlong Chen, Zhangyang Wang
CADTransformer
[CVPR 2022] "CADTransformer: Panoptic Symbol Spotting Transformer for CAD Drawings", Zhiwen Fan, Tianlong Chen, Peihao Wang, Zhangyang Wang
Unified-LTH-GNN
[ICML 2021] "A Unified Lottery Tickets Hypothesis for Graph Neural Networks", Tianlong Chen*, Yongduo Sui*, Xuxi Chen, Aston Zhang, Zhangyang Wang
ChainCoder
[ICML 2023] "Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation", Wenqing Zheng, S P Sharan, Ajay Kumar Jaiswal, Kevin Wang, Yihan Xi, Dejia Xu, Zhangyang Wang
SteinDreamer
"SteinDreamer: Variance Reduction for Text-to-3D Score Distillation via Stein Identity" by Peihao Wang, Zhiwen Fan, Dejia Xu, Dilin Wang, Sreyas Mohan, Forrest Iandola, Rakesh Ranjan, Yilei Li, Qiang Liu, Zhangyang Wang, Vikas Chandra
Graph-Mixture-of-Experts
[NeurIPS'23] Graph Mixture of Experts: Learning on Large-Scale Graphs with Explicit Diversity Modeling. Haotao Wang, Ziyu Jiang, Yuning You, Yan Han, Gaowen Liu, Jayanth Srinivasa, Ramana Rao Kompella, Zhangyang Wang
ppt2script
Automatically generate a speech script from PPT slides, based on ChatGPT
Data-Efficient-Scaling
[ICML 2023] "Data Efficient Neural Scaling Law via Model Reusing" by Peihao Wang, Rameswar Panda, Zhangyang Wang
ramanujan-on-pai
[ICLR 2023] "Revisiting Pruning At Initialization Through The Lens of Ramanujan Graph" by Duc Hoang, Shiwei Liu, Radu Marculescu, Atlas Wang
Trap-and-Replace-Backdoor-Defense
[NeurIPS'22] Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork. Haotao Wang, Junyuan Hong, Aston Zhang, Jiayu Zhou, Zhangyang Wang
graph_ladling
[ICML 2023] Graph Ladling: Shockingly Simple Parallel GNN Training without Intermediate Communication. Ajay Jaiswal, Shiwei Liu, Tianlong Chen, Ying Ding, and Zhangyang Wang
instant_soup
[ICML 2023] Instant Soup: Cheap Pruning Ensembles in A Single Pass Can Draw Lottery Tickets from Large Models. Ajay Jaiswal, Shiwei Liu, Tianlong Chen, Ying Ding, and Zhangyang Wang
essential_sparsity
[NeurIPS 2023] "The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter", Ajay Jaiswal, Shiwei Liu, Tianlong Chen, and Zhangyang Wang
Chasing-Better-DIPs
[TMLR] "Chasing Better Deep Image Priors between Over- and Under-parameterization" Qiming Wu, Xiaohan Chen, Yifan Jiang, Zhangyang Wang
Bi-RPT
[CPAL 2024] "Cross-Quality Few-Shot Transfer for Alloy Yield Strength Prediction: A New Materials Science Benchmark and A Sparsity-Oriented Optimization Framework" by Xuxi Chen, Tianlong Chen, Everardo Yeriel Olivares, Kate Elder, Scott K. McCall, Aurelien Pierre Philippe Perron, Joseph T. McKeown, Bhavya Kailkhura, Zhangyang Wang, Brian Gallagher
QuantumSEA
[QCE 2023] "QuantumSEA: In-Time Sparse Exploration for Noise Adaptive Quantum Circuits" Tianlong Chen, Zhenyu Zhang, Hanrui Wang, Jiaqi Gu, Zirui Li, David Z Pan, Frederic T Chong, Song Han, Zhangyang Wang
FullSpectrum-PAI
[NeurIPS'23] "Don't Just Prune by Magnitude! Your Mask Topology is Another Secret Weapon" Duc Hoang, Souvik Kundu, Shiwei Liu, Zhangyang Wang