Lu Yin (luuyin)

Company: University of Aberdeen

Location: UK

Home Page: https://scholar.google.com/citations?user=G4Xe1NkAAAAJ

Lu Yin's starred repositories

OwLore

Official Pytorch Implementation of "OwLore: Outlier-weighed Layerwise Sampled Low-Rank Projection for Memory-Efficient LLM Fine-tuning" by Pengxiang Li, Lu Yin, Xiaowei Gao, Shiwei Liu

Language: Python · Stargazers: 13 · Issues: 0

Junk_DNA_Hypothesis

"Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity" Lu Yin, Shiwei Liu, Ajay Jaiswal, Souvik Kundu, Zhangyang Wang

Language: Python · Stargazers: 12 · Issues: 0

chase

Dynamic Sparsity Is Channel-Level Sparsity Learner [NeurIPS 2023]

Language: Python · Stargazers: 3 · Issues: 0

OWL

Official Pytorch Implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity"

Language: Python · License: MIT · Stargazers: 43 · Issues: 0
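
A rough, illustrative sketch of the outlier-weighted layerwise idea behind OWL: layers with a larger fraction of outlier weights (magnitudes far above the layer mean) are assigned lower pruning sparsity. The function below is a minimal plain-PyTorch approximation; its name, the outlier multiplier m, and the allocation range lam are made-up parameters, not the repository's actual interface.

```python
import torch

def owl_style_sparsity(weights, target_sparsity=0.7, m=5.0, lam=0.08):
    """weights: list of weight tensors; returns one sparsity level per layer (illustrative only)."""
    # Layerwise outlier ratio: fraction of weights whose magnitude exceeds
    # m times the layer's mean magnitude.
    outlier_ratio = []
    for w in weights:
        a = w.abs()
        outlier_ratio.append((a > m * a.mean()).float().mean().item())
    r = torch.tensor(outlier_ratio)
    # Normalize the ratios to [0, 1], map them into [-lam, +lam], and subtract
    # from the uniform target, so outlier-heavy layers keep more weights.
    r = (r - r.min()) / (r.max() - r.min() + 1e-12)
    sparsity = target_sparsity - (2 * lam * r - lam)
    return sparsity.clamp(0.0, 1.0).tolist()

# Example with two random "layers": the layer with heavier-tailed weights
# receives the lower sparsity (more weights kept).
layers = [torch.randn(256, 256), torch.randn(256, 256) * torch.randn(256, 256)]
print(owl_style_sparsity(layers))
```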

ILM-VP

[CVPR23] "Understanding and Improving Visual Prompting: A Label-Mapping Perspective" by Aochuan Chen, Yuguang Yao, Pin-Yu Chen, Yihua Zhang, and Sijia Liu

Stargazers: 1 · Issues: 0

DSify

Boosting Driving Scene Understanding with Advanced Vision-Language Models

Language: Python · License: BSD-3-Clause · Stargazers: 31 · Issues: 0

BERT-Tickets

[NeurIPS 2020] "The Lottery Ticket Hypothesis for Pre-trained BERT Networks", Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin

Language: Python · License: MIT · Stargazers: 137 · Issues: 0

transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

Language: Python · License: Apache-2.0 · Stargazers: 130,059 · Issues: 0

Generating-the-simple-shape-dataset

Generate a simple shape dataset with different colors, shapes, thicknesses, and heights.

Language: Jupyter Notebook · Stargazers: 3 · Issues: 0

UGTs-LoG

This is the official code for UGTs.

Language: Python · Stargazers: 11 · Issues: 0

mixture-of-experts

A Pytorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models

Language: Python · License: MIT · Stargazers: 584 · Issues: 0

mixture-of-experts

PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538

Language: Python · License: GPL-3.0 · Stargazers: 913 · Issues: 0
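
Both mixture-of-experts repositories above implement the sparsely-gated layer of Shazeer et al. As a rough illustration of the core routing idea, the toy module below selects the top-k experts per token and mixes their outputs with renormalized gate weights; the class name, sizes, and loop-based dispatch are purely illustrative and match neither repository's API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    def __init__(self, dim=64, num_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(dim, num_experts)  # router: token -> expert scores
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
             for _ in range(num_experts)]
        )

    def forward(self, x):                              # x: (tokens, dim)
        scores = self.gate(x)                          # (tokens, num_experts)
        topv, topi = scores.topk(self.k, dim=-1)       # keep the k best experts per token
        weights = F.softmax(topv, dim=-1)              # renormalize over the kept experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topi[:, slot] == e              # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Example: route 16 tokens of width 64 through the toy MoE layer.
y = TinyMoE()(torch.randn(16, 64))
```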

git-re-basin-pytorch

Git Re-Basin: Merging Models modulo Permutation Symmetries in PyTorch

Language: Python · License: MIT · Stargazers: 69 · Issues: 0

DynamicReLU

Implementation of Dynamic ReLU in PyTorch

Language: Python · Stargazers: 203 · Issues: 0
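
For context on the DynamicReLU entry: the activation predicts per-channel slopes and offsets from globally pooled features and takes the maximum over the resulting linear pieces. The module below is a hedged, DY-ReLU-B-style approximation with assumed hyperparameters, not the repository's actual implementation.

```python
import torch
import torch.nn as nn

class DyReLUB(nn.Module):
    def __init__(self, channels, reduction=4, k=2):
        super().__init__()
        self.channels, self.k = channels, k
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, 2 * k * channels), nn.Sigmoid(),
        )
        # Start close to a plain ReLU: first slope 1, remaining slopes 0, offsets 0.
        self.register_buffer("init_a", torch.tensor([1.0] + [0.0] * (k - 1)))

    def forward(self, x):                                   # x: (N, C, H, W)
        ctx = x.mean(dim=(2, 3))                            # global average pooling -> (N, C)
        theta = 2.0 * self.fc(ctx) - 1.0                    # coefficient residuals in [-1, 1]
        theta = theta.view(-1, self.channels, 2, self.k)    # (N, C, {slope, offset}, k)
        a = theta[:, :, 0] + self.init_a                    # per-channel slopes  (N, C, k)
        b = theta[:, :, 1] * 0.5                            # per-channel offsets (N, C, k)
        pieces = x.unsqueeze(-1) * a[:, :, None, None, :] + b[:, :, None, None, :]
        return pieces.max(dim=-1).values                    # max over the k linear pieces

# Example: apply the dynamic activation to a small feature map.
y = DyReLUB(channels=8)(torch.randn(2, 8, 16, 16))
```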

SLaK

[ICLR 2023] "More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity"; [ICML 2023] "Are Large Kernels Better Teachers than Transformers for ConvNets?"

Language: HTML · License: MIT · Stargazers: 258 · Issues: 0

html-resume

A single-page resume template completely typeset with HTML & CSS.

Language: HTML · License: Apache-2.0 · Stargazers: 539 · Issues: 0

al-folio

A beautiful, simple, clean, and responsive Jekyll theme for academics

Language: HTML · License: MIT · Stargazers: 10,108 · Issues: 0

Net2Net

Net2Net implementation in PyTorch for arbitrary vision layers.

Language: Python · Stargazers: 38 · Issues: 0

Firefly

Official repo for "Firefly Neural Architecture Descent: A General Approach for Growing Neural Networks". Accepted at NeurIPS 2020.

Language: Python · License: MIT · Stargazers: 28 · Issues: 0

openrazer

Open source driver and user-space daemon to control Razer lighting and other features on GNU/Linux

Language: C · License: GPL-2.0 · Stargazers: 3,544 · Issues: 0

DCTpS

Code for testing DCT plus Sparse (DCTpS) networks

Language: Python · License: MIT · Stargazers: 13 · Issues: 0

Learning-Loss-for-Active-Learning

Reproducing experimental results of LL4AL [Yoo et al. 2019 CVPR]

Language: Python · Stargazers: 211 · Issues: 0

SET-MLP-ONE-MILLION-NEURONS

[Neural Computing and Applications] "Sparse evolutionary Deep Learning with over one million artificial neurons on commodity hardware" by Shiwei Liu, Decebal Constantin Mocanu, Mykola Pechenizkiy

Language: HTML · Stargazers: 2 · Issues: 0

Random_Pruning

[ICLR 2022] "The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training" by Shiwei Liu, Tianlong Chen, Xiaohan Chen, Li Shen, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy

Language: Python · Stargazers: 69 · Issues: 0

Selfish-RNN

[ICML 2021] "Selfish Sparse RNN Training" by Shiwei Liu, Decebal Constantin Mocanu, Yulong Pei, Mykola Pechenizkiy

Language: Python · Stargazers: 10 · Issues: 0

GraNet

[NeurIPS 2021] "Sparse Training via Boosting Pruning Plasticity with Neuroregeneration"

Language: Python · Stargazers: 26 · Issues: 0