horseee / LLM-Pruner

[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports LLaMA, Llama-2, BLOOM, Vicuna, Baichuan, etc.

Home Page: https://arxiv.org/abs/2305.11627

Sparse Mask question

coldplayers opened this issue · comments

Hi, I have a question about the sparsity of the weights:
After merging LoRA into the sparse weights, will the sparse weights become dense?

Hi @coldplayers, LLM-Pruner is a structural pruning method; after pruning we get a dense model. The pruned weight matrices are simply smaller, with no sparse mask, so merging LoRA into them does not change that.
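
A minimal sketch (not the repository's own code) of the distinction, using a hypothetical toy `nn.Linear` layer: mask-based sparsity zeroes individual weights and keeps the matrix shape, while structural pruning removes whole channels, leaving a smaller but fully dense matrix, so a later LoRA merge has nothing to "densify".

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy layer standing in for one projection (hypothetical sizes).
layer = nn.Linear(8, 8, bias=False)

# Unstructured (mask-based) sparsity: individual weights are zeroed,
# the matrix keeps its 8x8 shape and stays "sparse".
mask = (torch.rand_like(layer.weight) > 0.5).float()
sparse_weight = layer.weight.data * mask
print(sparse_weight.shape)                  # torch.Size([8, 8])

# Structural pruning: entire output channels are removed, so the
# remaining matrix is smaller but dense -- no mask survives.
keep_rows = torch.tensor([0, 2, 3, 5, 7])   # hypothetical kept channels
pruned_layer = nn.Linear(8, len(keep_rows), bias=False)
pruned_layer.weight.data = layer.weight.data[keep_rows].clone()
print(pruned_layer.weight.shape)            # torch.Size([5, 8])

# Merging LoRA into the pruned layer (W + B @ A) just adds a dense
# low-rank update to an already-dense matrix.
r = 2
A = torch.randn(r, 8) * 0.01                # hypothetical LoRA factors
B = torch.randn(len(keep_rows), r) * 0.01
pruned_layer.weight.data += B @ A
print(pruned_layer.weight.shape)            # still torch.Size([5, 8])
```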