ludwig-ai / ludwig

Low-code framework for building custom LLMs, neural networks, and other AI models

Home Page: http://ludwig.ai


Token-level Probability Always 0.0 When Fine-tuning Llama2-7b Model on Single GPU

MoOo2mini opened this issue · comments

Describe the bug
The token-level probabilities consistently appear as 0.0 when fine-tuning the Llama2-7b model using "Ludwig + DeepLearning.ai: Efficient Fine-Tuning for Llama2-7b on a Single GPU.ipynb".
https://colab.research.google.com/drive/1Ly01S--kUwkKQalE-75skalp-ftwl0fE?usp=sharing

The notebook below contains my code that exhibits the problem:
https://colab.research.google.com/drive/1OmbCKlPzlxm4__iThYqB9PSLUWZZVptz?usp=sharing

To Reproduce
Steps to reproduce the behavior:

  1. Fine-tune the Llama2-7b model using the provided notebook.
  2. Run prediction with the predict function using modified parameters: skip_save_unprocessed_output set to False and a specific output_directory provided.
  3. Despite modifications, the token-level probabilities remain 0.0.
ludwig.predict(
  dataset=None,
  data_format=None,
  split='full',
  batch_size=128,
  skip_save_unprocessed_output=True,
  skip_save_predictions=True,
  output_directory='results',
  return_type=pd.DataFrame,  # i.e. pandas.core.frame.DataFrame
  debug=False
)
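For context on where I am looking for the values: predict returns the predictions as a DataFrame, and the token-level values in question appear in the probability-related columns. A minimal pandas-only sketch of how I inspect them (the column names and values below are mocked for illustration, not real Ludwig output):

```python
import pandas as pd

# Mocked stand-in for the DataFrame returned by predict(); in my real run
# the probabilities column is all zeros, which is the bug reported here.
predictions = pd.DataFrame({
    "output_predictions": [["Hello", "world"]],
    "output_probabilities": [[0.0, 0.0]],  # observed: always 0.0
})

# Locate every probability-related column and print its first row.
prob_cols = [c for c in predictions.columns if "probabilit" in c]
for col in prob_cols:
    print(col, predictions[col].iloc[0])
```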

Expected behavior
Token-level probabilities should reflect the model's confidence in predicting each token's output.
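As a sanity check on what nonzero token-level probabilities should look like, here is a minimal sketch (plain NumPy, independent of Ludwig) of deriving per-token probabilities from logits via softmax; the logit values are made up for illustration:

```python
import numpy as np

# Hypothetical logits for 3 generated tokens over a 5-token vocabulary.
logits = np.array([
    [2.0, 0.5, 0.1, -1.0, 0.3],
    [0.2, 3.1, 0.0, 0.5, -0.7],
    [1.0, 1.0, 4.2, 0.1, 0.0],
])

# Softmax over the vocabulary axis gives a probability distribution per step.
probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs /= probs.sum(axis=-1, keepdims=True)

# Probability of the chosen (argmax) token at each step -- these are
# strictly positive by construction, never exactly 0.0 as in the report.
token_probs = probs[np.arange(len(probs)), probs.argmax(axis=-1)]
print(token_probs)
```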

Screenshots
N/A

Environment:

  • OS: Ubuntu 20.04
  • Python version: 3.8.10
  • Ludwig version: 0.3.3

Additional context
The logger within the predict function does not seem to function as expected.
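One thing worth trying when the predict-time logger stays silent is lowering the logging threshold before calling predict. A minimal sketch using the standard library only (the logger name "ludwig" is assumed to be the package's root logger):

```python
import logging

# Route log records to the console and drop the threshold to DEBUG so that
# predict-time messages are not filtered out by the default WARNING level.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("ludwig").setLevel(logging.DEBUG)

print(logging.getLogger("ludwig").level)  # 10 == logging.DEBUG
```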

Screenshot 2024-04-02 4:45 PM