horseee / LLM-Pruner

[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support LLaMA, Llama-2, BLOOM, Vicuna, Baichuan, etc.

Home Page: https://arxiv.org/abs/2305.11627

hi, Does post_training support full parameter fine-tuning of the pruned model?

StevensPrime opened this issue · comments

Hi. The released code does not support full-parameter fine-tuning.
However, you can remove the PEFT-related code in post_training.py to enable full-parameter fine-tuning of the pruned model.
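To illustrate the suggestion above, here is a minimal sketch (not LLM-Pruner's actual code) of what replacing the PEFT wrapping amounts to: instead of freezing the base weights and training only LoRA adapters, every parameter of the pruned model is marked trainable and passed to the optimizer. The model and function names below are hypothetical stand-ins; a real run would load the pruned LLaMA checkpoint instead of the toy module.

```python
import torch
import torch.nn as nn

def prepare_for_full_finetuning(model: nn.Module) -> nn.Module:
    # Instead of wrapping the model with peft's get_peft_model (which freezes
    # the base weights), make every remaining parameter trainable.
    for param in model.parameters():
        param.requires_grad = True
    return model

def count_trainable(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Toy stand-in for a pruned model.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))
for p in model.parameters():
    p.requires_grad = False  # simulate base weights frozen by a PEFT setup

model = prepare_for_full_finetuning(model)
trainable = count_trainable(model)
total = sum(p.numel() for p in model.parameters())
print(trainable, total)  # all parameters are now trainable

# The optimizer then receives the full parameter set, not just adapters.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
```

Note that full-parameter fine-tuning of a pruned LLM needs far more GPU memory than the LoRA-based recovery in the released script, since optimizer states are kept for every weight.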