Global vs Local Decoding for Large Language Models

This repository provides a Python script for generating text sequences using various large language models (LLMs), conducting Markov Chain Monte Carlo (MCMC) analysis, and evaluating the generated sequences. The script explores different decoding strategies, focusing on local versus global normalization techniques to understand their effects on the quality and diversity of the generated text.
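
To make the distinction concrete, here is a minimal, self-contained Python sketch (illustrative only, not code from this repository) for a toy two-step model with a 3-token vocabulary. Local normalization renormalizes the truncated top-k distribution at every step and multiplies the per-step probabilities; global normalization scores each sequence that survives truncation with the untruncated model probability and renormalizes once over the whole set.

import itertools
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Toy two-step model: the step-2 logits depend on the step-1 token.
# Branch 0 is peaked, branch 1 is nearly flat, so truncation removes
# a different amount of probability mass in each branch.
step1_logits = [1.0, 0.0, -1.0]
step2_logits = {0: [3.0, 0.0, 0.0], 1: [0.5, 0.4, 0.3], 2: [0.0, 0.0, 0.0]}
k = 2  # top-k truncation

def topk_ids(logits):
    return sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]

def local_prob(seq):
    # Local normalization: renormalize over the top-k tokens at each
    # step, then multiply the per-step probabilities.
    p, logits = 1.0, step1_logits
    for tok in seq:
        ids = topk_ids(logits)
        if tok not in ids:
            return 0.0  # sequence pruned by truncation
        p *= softmax([logits[i] for i in ids])[ids.index(tok)]
        logits = step2_logits[tok]
    return p

def model_prob(seq):
    # Untruncated model probability of the full sequence.
    return softmax(step1_logits)[seq[0]] * softmax(step2_logits[seq[0]])[seq[1]]

# Global normalization: keep only the sequences that survive top-k at
# every step, then renormalize their untruncated probabilities jointly.
support = [s for s in itertools.product(range(3), repeat=2) if local_prob(s) > 0]
z = sum(model_prob(s) for s in support)

for s in support:
    print(s, f"local={local_prob(s):.3f}", f"global={model_prob(s) / z:.3f}")

Running this shows the two distributions disagree: local normalization boosts sequences that pass through the flat branch (where truncation discards more mass), while global normalization keeps the sequences in their model-given proportions.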

Requirements

  • Python 3.8.6 or higher

Installation

Clone the repository to your local machine:

git clone https://github.com/lowlypalace/global-decoding.git
cd global-decoding

Install the required Python packages:

pip install -r requirements.txt
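
Optionally, install inside a virtual environment first (standard Python tooling, not specific to this project):

python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt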

Usage

The script can be run from the command line with various arguments to customize the text generation, MCMC analysis, and evaluation process.

Basic Command

python main.py \
  --top_k 100 \
  --sequence_count 1000 \
  --batch_size_seq 32 \
  --batch_size_prob 16 \
  --model_name gpt2-medium \
  --mcmc_num_samples 100 \
  --eval_num_sequences 100 \
  --seed 0

By default, all actions are performed in the following order: generate_seqs, compute_probs, run_mcmc, run_eval_mauve, run_eval_bleu. A subset of actions can be selected with the --actions argument. Because later actions consume the outputs of earlier ones, their relative order must be preserved. One exception: run_eval_bleu does not depend on run_eval_mauve, but both require run_mcmc to have completed first.
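
For example, assuming --actions accepts a space-separated list (typical for argparse-style CLIs; only a single action is shown in the examples here), the pipeline could be stopped after the MCMC step like so:

python main.py \
  --model_name gpt2-medium \
  --actions generate_seqs compute_probs run_mcmc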

The available actions can be seen by running python main.py --help.

Resume Computation

If the sequences were generated but a subsequent step (e.g., the BLEU evaluation) failed or timed out, the run can be resumed as follows:

python main.py \
  --preload_dir 562fb1 \
  --model_name pythia-1.4b \
  --actions run_eval_bleu

The remaining arguments are read from the metadata.json file in the directory given by --preload_dir.

Arguments

Run python main.py --help to see all available arguments and their descriptions.

Testing

To run the tests, use the following command:

python -m unittest
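
For more detailed per-test output, the suite can be run in verbose mode (a standard unittest flag):

python -m unittest -v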

Output

Outputs, including generated sequences, logs, and evaluation results, are saved to the directory given by --output_dir.

Formatting

To format the code with Black, use the following command:

black .
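
To check formatting without modifying any files (a standard Black option), run:

black --check .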

Contributing

Contributions to this project are welcome! Please fork the repository, make your changes, and submit a pull request.
