tvergho / llm-neurips-train

5th place submission to the 2023 NeurIPS LLM Efficiency Challenge.


NeurIPS Large Language Model Efficiency Challenge Submission

NeurIPS ran a Large Language Model Efficiency Challenge in 2023; this repository is my evaluation submission. Training was limited to 24 hours on a single 24GB GPU. The submission placed 5th in the final evaluation for the 4090 track, on both the open and closed test sets.

To start the server for the neurips/local model, build and run the Dockerfile on any machine with an NVIDIA GPU. The model was trained and tested on an RTX 4090 with 24GB of memory. The server starts on port 80 by default.
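A minimal build-and-run sketch (the image tag and host port mapping are illustrative, not taken from the repo):

```sh
# Build the inference image from the repo root.
docker build -t neurips-inference .

# Run with GPU access; the server listens on port 80 inside the container,
# mapped here to port 8080 on the host.
docker run --gpus all -p 8080:80 neurips-inference
```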

Evaluation can then be performed using HELM, specifying the neurips/local model. neurips/local is a fine-tuned Mistral-7B model trained on a combination of the LIMA and Open-Platypus datasets.
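A HELM invocation might look roughly like the following; flag names vary across HELM versions, and the run-spec file shown here is hypothetical:

```sh
# Run HELM against the local server (illustrative flags and values).
# run_specs.conf is assumed to contain entries referencing model=neurips/local.
helm-run \
  --conf-paths run_specs.conf \
  --suite neurips-eval \
  --max-eval-instances 100
```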

DoLa (Decoding by Contrasting Layers) was implemented as a decoding-time technique to boost the model's performance on TruthfulQA.

Training Reproduction

All training-related code is in the training directory. Building and running the Dockerfile there starts the training loop; once it completes, the final model checkpoint is written to the training/lit-gpt/out directory.
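A sketch of the reproduction steps, assuming the Dockerfile lives in training/ (the image tag and container-side mount path are assumptions):

```sh
cd training

# Build the training image.
docker build -t neurips-train .

# Run the training loop with GPU access; mount the output directory so
# the checkpoint written to lit-gpt/out survives the container.
docker run --gpus all \
  -v "$(pwd)/lit-gpt/out:/workspace/lit-gpt/out" \
  neurips-train
```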

The datasets used are LIMA and specific components of Open-Platypus. The only Open-Platypus sources used are ScienceQA, SciBench, ReClor, TheoremQA, ARB, and Guanaco, all of which are human-generated and/or were clarified to fall within the scope of the competition rules.

Languages

Python 98.7% · Jupyter Notebook 0.6% · Dockerfile 0.6% · Shell 0.2%