Adapters for finetuning large models on low-end systems
xloem opened this issue · comments
People should be aware of the research and tools at https://github.com/adapter-hub/adapter-transformers . Adapters insert small bottleneck layers between a model's layers; the pretrained weights are frozen and only the adapter weights are trained, and trained adapters can then be composed to combine specific skillsets. This would be good for personal coding styles or for changes like refactoring, commenting, or bugfixing.
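Roughly, each adapter is just a small bottleneck with a residual connection around it: project the hidden state down to a few dimensions, apply a nonlinearity, project back up, and add the result to the input. A minimal pure-Python sketch (the dimensions and the ReLU are illustrative choices, not the library's exact defaults):

```python
def bottleneck_adapter(hidden, w_down, w_up):
    """One bottleneck adapter applied to a single token's hidden state.

    hidden: list of floats, length = hidden_dim
    w_down: hidden_dim x bottleneck_dim matrix (trained)
    w_up:   bottleneck_dim x hidden_dim matrix (trained)
    The surrounding pretrained layers stay frozen; only these two
    small matrices are learned.
    """
    # down-projection into the small bottleneck
    down = [sum(h * w_down[i][j] for i, h in enumerate(hidden))
            for j in range(len(w_down[0]))]
    # nonlinearity (ReLU here for illustration)
    act = [max(0.0, x) for x in down]
    # up-projection back to the hidden size
    up = [sum(a * w_up[i][j] for i, a in enumerate(act))
          for j in range(len(w_up[0]))]
    # residual connection: the adapter's output is added to its input
    return [h + u for h, u in zip(hidden, up)]

# Tiny example: hidden_dim=2, bottleneck_dim=1
out = bottleneck_adapter([2.0, 3.0],
                         w_down=[[1.0], [0.0]],
                         w_up=[[0.5, 0.5]])
print(out)  # → [3.0, 4.0]
```

Because only `w_down` and `w_up` are trained per layer, the trainable parameter count is a tiny fraction of the full model, which is what makes finetuning feasible on low-end hardware.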
I don't understand how that would be helpful in the context of fauxpilot. Can you please elaborate a little more on how this could be applied to learning refactoring changes?
Adapters basically provide lightweight finetuning. They're a powerful tool that people on lower-end systems would ideally have access to.
For example, if you wanted a model feature for converting from one code form to another, you would train the adapter on examples of the transformation just like any other model: either from existing code, or by using prompts to augment or curate data in a partially supervised way.
Simpler things you can do with an adapter:
- personalize the model by training it on your own code in general, or on code you respect;
- strengthen it considerably for one language by training only on the language you are writing in;
- add comprehension of missing contextual information by defining a consistent norm for presenting that context in the training data;
- add direct production of patch files;
- train it specifically to generate comments, by stripping comments from the data and predicting them;
- [edit: you could also train an adapter to predict your keystrokes and edits much better]
When an architecture is trained for a specific task this way, it becomes much stronger at that task. Adapters are only a few megabytes large, so they are quickly hot-swappable and composable.
Something else I have been trying a little is architectures that accept longer input sequences (an orthogonal idea), which lets one include neighboring files in the input and is also very powerful.
Looks like an interesting idea. One hurdle is that we would have to find a way to translate the adapters into FasterTransformer as well, since that's what we currently use for making inference fast.