horseee / LLM-Pruner

[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support LLaMA, Llama-2, BLOOM, Vicuna, Baichuan, etc.

Home Page: https://arxiv.org/abs/2305.11627

a post-training issue

cmnfriend opened this issue

Thanks for your nice work!

When I post-train the pruned model by running python post_training.py --prune_model prune_log/pytorch_model.bin --data_path yahma/alpaca-cleaned --output_dir tune_log --wandb_project llama_tune --lora_r 8 --num_epochs 2 --learning_rate 1e-4 --batch_size 64, I run into the following error:

wandb.errors.UsageError: api_key not configured (no-tty). call wandb.login(key=[your_api_key])

Could you please tell me how I should deal with that? Thank you!

Hi. You need to configure wandb first. You can follow the wandb setup instructions 😄
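
For reference, a minimal sketch of how wandb can be configured before launching the script (the key value below is a placeholder for your own API key from the wandb website):

wandb login YOUR_WANDB_API_KEY
# or, equivalently, set it as an environment variable before training:
export WANDB_API_KEY=YOUR_WANDB_API_KEY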

You could use python post_training.py --prune_model prune_log/pytorch_model.bin --data_path yahma/alpaca-cleaned --output_dir tune_log --wandb_project "none" --lora_r 8 --num_epochs 2 --learning_rate 1e-4 --batch_size 64 to run it.
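
If you do not need wandb logging at all, another option is to disable wandb through its standard environment variable when launching the command above. This is a general wandb feature, not something specific to this repo, so treat the assumption that post_training.py does not override it as unverified:

# Disable wandb entirely for this run (general wandb env var, assumption: the script honors it)
WANDB_MODE=disabled python post_training.py --prune_model prune_log/pytorch_model.bin --data_path yahma/alpaca-cleaned --output_dir tune_log --wandb_project "none" --lora_r 8 --num_epochs 2 --learning_rate 1e-4 --batch_size 64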