nlpxucan / WizardLM

LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath


Fine-tune WizardCoder on private repos

bishwenduk029 opened this issue

Are there any instructions or documentation on how WizardCoder-15B can be fine-tuned on private code repositories, so that it better understands those repos and can help with context-aware auto-completion? It could also enable other use cases such as automatic PR code reviews.

This is an excellent research direction, but we haven't focused much on this topic.

Being able to train on the Read the Docs documentation and source code of the libraries you care most about would be a very helpful improvement, and would probably make WizardCoder more effective than even the much larger proprietary models.
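As a rough illustration of what "training on the source of the libraries you care about" could look like, here is a minimal sketch that collects a repo's files into a JSONL corpus of plain-text chunks for continued causal-LM training. The paths, file extensions, and chunk size are my own assumptions, not anything from the WizardCoder training setup.

```python
# Hypothetical sketch: turn a local repo checkout into a JSONL corpus of
# text chunks. Paths, extensions, and chunk size are illustrative only.
import json
from pathlib import Path

REPO_ROOT = Path("path/to/private-repo")  # assumption: local checkout of the repo
OUTPUT = Path("repo_corpus.jsonl")
CHUNK_LINES = 120                         # rough chunk size; tune for your context length

def iter_chunks(root: Path):
    for path in root.rglob("*.py"):       # add other extensions (.md, .rst, ...) as needed
        lines = path.read_text(errors="ignore").splitlines()
        for start in range(0, len(lines), CHUNK_LINES):
            chunk = "\n".join(lines[start:start + CHUNK_LINES]).strip()
            if chunk:
                # Prefix each chunk with its file path so the model sees repo structure.
                yield f"# File: {path.relative_to(root)}\n{chunk}"

with OUTPUT.open("w") as f:
    for chunk in iter_chunks(REPO_ROOT):
        f.write(json.dumps({"text": chunk}) + "\n")
```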

What steps are required to inject new knowledge into the WizardCoder model? Would fine-tuning be enough, or would a full retrain be necessary?
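For what it's worth, parameter-efficient fine-tuning (e.g. LoRA adapters) on a corpus like the one above is usually enough to surface repo-specific patterns and style; a full retrain of the base model is rarely practical for this purpose. Below is a hedged sketch using Hugging Face transformers + peft. The model id, dataset path, and hyperparameters are assumptions, not a tested recipe from the WizardLM authors.

```python
# Hypothetical sketch: LoRA fine-tuning of WizardCoder-15B on repo_corpus.jsonl.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "WizardLM/WizardCoder-15B-V1.0"  # assumed Hub id; adjust as needed
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # some code tokenizers lack a pad token
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Train only small adapter matrices instead of the full 15B parameters.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

dataset = load_dataset("json", data_files="repo_corpus.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="wizardcoder-repo-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("wizardcoder-repo-lora")  # adapters only, a few hundred MB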