
Improving Code Generation by Training with Natural Language Feedback

Authors: Angelica Chen, Jérémy Scheurer, Tomasz Korbak, Jon Ander Campos, Jun Shern Chan, Samuel R. Bowman, Kyunghyun Cho, Ethan Perez

This repository contains the code and data (human-written feedback and refinements) for running the Imitation learning from Language Feedback (ILF) algorithm for code generation, as described in "Improving Code Generation by Training with Natural Language Feedback" by Chen et al. (2023).

Installation

Our code relies upon the jaxformer repository and open-source CodeGen-Mono checkpoints.

To install all dependencies and download the necessary model checkpoints:

git clone git@github.com:nyu-mll/ILF-for-code-generation.git
cd ILF-for-code-generation
conda env create -f environment.yml

# To download codegen-6B-mono
wget -P checkpoints https://storage.googleapis.com/sfr-codegen-research/checkpoints/codegen-6B-mono.tar.gz && tar -xvf checkpoints/codegen-6B-mono.tar.gz -C checkpoints/

In our paper we use the CodeGen-Mono 6B checkpoint, but you can use any of the other CodeGen models by replacing the wget command above with the corresponding download link.
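For example, assuming the other checkpoints follow the same naming pattern as the 6B archive (verify the exact link against the jaxformer repository):

# To download codegen-2B-mono instead (URL assumed to follow the same naming pattern)
wget -P checkpoints https://storage.googleapis.com/sfr-codegen-research/checkpoints/codegen-2B-mono.tar.gz && tar -xvf checkpoints/codegen-2B-mono.tar.gz -C checkpoints/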

To run the ILF pipeline

To run the ILF pipeline using our dataset, run (from this directory):

source ilf_pipeline.sh -d $(pwd) -n <EXPERIMENT_NAME>

with <EXPERIMENT_NAME> replaced by the name of the subdirectory in which you want the results to be stored.
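For example, using a hypothetical experiment name my_ilf_run:

# Example invocation; my_ilf_run is just an illustrative experiment name
source ilf_pipeline.sh -d $(pwd) -n my_ilf_run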

About

License: MIT License


Languages

Python 90.5%, Shell 9.5%