EleutherAI / gpt-neox

An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries

Home Page: https://www.eleuther.ai/

Finetune

liuxinxin123 opened this issue

Can we support instruction finetuning?

Nothing special is required to do instruction-finetuning. It's the same as normal finetuning, just with different data.
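To make this concrete, here is a minimal sketch of how instruction data might be prepared, assuming the jsonl-with-a-`text`-field input format that GPT-NeoX's data preprocessing script consumes. The prompt template and the pairs themselves are arbitrary placeholders, not anything mandated by the library:

```python
import json

# Hypothetical instruction/response pairs; any source of pairs works.
pairs = [
    {"instruction": "Translate 'bonjour' to English.", "response": "Hello."},
    {"instruction": "Name a prime number greater than 10.", "response": "11."},
]

# Serialize each pair into a single training document. The
# "### Instruction / ### Response" template below is an arbitrary
# choice, not a GPT-NeoX requirement; use whatever format you want
# the model to learn to follow.
with open("instruct_data.jsonl", "w") as f:
    for p in pairs:
        text = (
            f"### Instruction:\n{p['instruction']}\n\n"
            f"### Response:\n{p['response']}"
        )
        f.write(json.dumps({"text": text}) + "\n")
```

The resulting jsonl can then be tokenized with the repo's preprocessing script and pointed to from your training config, exactly as for ordinary finetuning data.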

If you're interested in RLHF, that's not something we plan on supporting ourselves; however, there are libraries such as trl and trlX that support it. The latter has a WIP integration with GPT-NeoX.
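For reference, a sketch in the style of trlX's high-level `trlx.train` entry point; the exact signature may differ across versions, and `"gpt2"` here is just a stand-in model path, not the NeoX integration itself:

```python
import trlx

# Toy reward function scoring each generated sample; a real RLHF setup
# would replace this with a learned reward model.
def reward_fn(samples, **kwargs):
    return [sample.count("cats") for sample in samples]

# "gpt2" is a placeholder; a GPT-NeoX checkpoint would slot in here
# once the WIP integration lands.
trainer = trlx.train("gpt2", reward_fn=reward_fn)
```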