Suggest to use `litgpt chat` after finetuning
awaelchli opened this issue · comments
Our finetuning tutorials end by suggesting to run `litgpt generate`, but make no mention of `litgpt chat`. I believe chatting is a more intuitive way to immediately use and test the model, so I propose to include this as the first step after finetuning in our tutorials.
(This would apply only to full and LoRA finetuning at the moment.)
Thoughts @rasbt?
We could even print the commands to the terminal after finetuning ends, so the user can copy paste it.
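A minimal sketch of what that could look like: a small helper that assembles the copy-pasteable hint at the end of a finetuning run. The function name `print_next_steps` and the checkpoint path are hypothetical, not part of the actual litgpt codebase.

```python
from pathlib import Path

def print_next_steps(checkpoint_dir: Path) -> str:
    # Hypothetical helper: build the hint shown after finetuning ends,
    # so the user can copy-paste the chat command directly.
    hint = (
        "Training finished. To chat with the finetuned model, run:\n"
        f"  litgpt chat {checkpoint_dir}"
    )
    print(hint)
    return hint

# Example (assumed output directory):
# print_next_steps(Path("out/finetune/lora/final"))
```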
I agree. I think `generate.py` is more useful when you want to execute the end-to-end example in a bash script, but for the general tutorials let's use `chat.py`.
At some point I would like us to merge them. They were originally kept separate to keep the code complexity low.