jina-ai/jerboa: LLM finetuning
Stargazers: 36 · Watchers: 11 · Issues: 42 · Forks: 3
jina-ai/jerboa Issues
Create correct outputs from Falcon by changing the generation configuration · Closed · 10 months ago · 1 comment
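
For context, a fix of this kind usually comes down to passing an explicit generation configuration rather than relying on Falcon's defaults. A minimal sketch using the Hugging Face transformers API; the parameter values are illustrative assumptions, not the settings jerboa actually shipped:

    from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

    tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
    model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b", trust_remote_code=True)

    # Explicit generation settings; Falcon's defaults can otherwise produce
    # unbounded or repetitive output.
    generation_config = GenerationConfig(
        max_new_tokens=256,        # illustrative value
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,  # Falcon has no dedicated pad token
    )

    inputs = tokenizer("What is instruction tuning?", return_tensors="pt")
    outputs = model.generate(**inputs, generation_config=generation_config)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))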
Add support for DeepSpeed · Closed · 10 months ago · 1 comment
Incorporate Baize data and training into the jerboa training pipeline · Closed · 10 months ago
Add automatic evaluation with GPT-3 · Closed · 10 months ago
Create an 8-bit model for inference out of Falcon 40B · Closed · 10 months ago
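
An 8-bit inference model along these lines can be produced with bitsandbytes quantization through transformers; a minimal sketch, since the issue does not show the exact loading options jerboa used:

    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    # Load Falcon 40B with 8-bit weights; requires the bitsandbytes package
    # and enough GPU memory for ~40 GB of int8 parameters.
    model = AutoModelForCausalLM.from_pretrained(
        "tiiuae/falcon-40b",
        quantization_config=BitsAndBytesConfig(load_in_8bit=True),
        device_map="auto",
        trust_remote_code=True,
    )
    tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b")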
Support CodeGen 1B in our training · Updated · 10 months ago
Python QA instruction tuning dataset · Closed · 10 months ago
Update HF models · Closed · 10 months ago · 1 comment
Create evaluation harness with ChatGPT · Updated · 10 months ago
Support the mosaicml dolly_hhrlhf dataset · Updated · 10 months ago
Align Falcon 40B on Code Alpaca · Closed · 10 months ago · 1 comment
Fix transformers version · Closed · 10 months ago
Add RedPajama 7B to our pipeline · Closed · 10 months ago · 1 comment
Align Falcon 7B on LIMA · Closed · 10 months ago · 2 comments
Align Falcon 7B on Alpaca · Closed · a year ago
In our evaluation code we found a bug where the maximum number of generated tokens is capped at 128 · Closed · 10 months ago
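
A 128-token cap like this typically appears when generate() falls back to a small default limit; the fix is to set it explicitly. A hedged sketch, reusing model and inputs from the generation sketch above (512 is an illustrative choice, not the value the team settled on):

    # Before: the evaluation loop implicitly capped generation at 128 tokens.
    # After: pass an explicit limit so answers are not truncated mid-sentence.
    outputs = model.generate(
        **inputs,
        max_new_tokens=512,  # illustrative; pick a limit that fits the eval task
    )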
There are cases where the model does not stop or repeats itself; we will try training for longer and see what happens · Closed · 10 months ago · 1 comment
Add the Dolly 15k instruction dataset · Closed · a year ago
Save full weights and upload them to HF, not just the adapters · Updated · a year ago
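
With PEFT/LoRA training, only the adapter weights are saved by default; getting full weights onto the Hub means merging the adapters into the base model first. A sketch using the peft API; both repo ids below are hypothetical placeholders:

    from peft import PeftModel
    from transformers import AutoModelForCausalLM

    base = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b", trust_remote_code=True)
    # "jina-ai/jerboa-adapters" is a hypothetical adapter repo id.
    model = PeftModel.from_pretrained(base, "jina-ai/jerboa-adapters")

    # Fold the LoRA adapters into the base weights, then push the full model.
    merged = model.merge_and_unload()
    merged.push_to_hub("jina-ai/jerboa-full")  # hypothetical target repo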
We need to train alpaca-lora with the same number of LoRA layers to be able to compare it to Falcon 7B and understand the effect of switching from LLaMA to Falcon · Closed · a year ago
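
The comparison this issue asks for hinges on keeping the LoRA hyperparameters fixed while swapping the base model; only target_modules differs, because LLaMA and Falcon name their attention projections differently. A sketch with illustrative values:

    from peft import LoraConfig

    # Identical rank/alpha/dropout so LLaMA vs. Falcon results stay comparable;
    # only the module names differ between the two architectures.
    llama_lora = LoraConfig(
        r=8, lora_alpha=16, lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    )
    falcon_lora = LoraConfig(
        r=8, lora_alpha=16, lora_dropout=0.05,
        target_modules=["query_key_value"],  # Falcon's fused attention projection
    )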
Align Falcon 7B on LIMA · Closed · a year ago
For Falcon, there are cases where the generation outputs an EOS token but does not stop · Closed · a year ago
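
Emitting an EOS token without stopping usually means generate() is watching for a different token id than the one the tokenizer actually produces; aligning the two is one way to address it. A sketch, reusing model and tokenizer from the sketches above:

    # Falcon can emit the tokenizer's EOS ("<|endoftext|>") while the
    # generation config expects a different id, so decoding never halts.
    model.generation_config.eos_token_id = tokenizer.eos_token_id
    model.generation_config.pad_token_id = tokenizer.eos_token_id
    outputs = model.generate(**inputs, max_new_tokens=256)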
Add the LIMA dataset to the training pipeline · Closed · a year ago · 1 comment
Align Falcon 40B on alpaca-lora · Closed · a year ago
Experiment with Lightning Fabric and reproduce the speed improvement from https://lightning.ai/pages/community/finetuning-falcon-efficiently/ · Closed · a year ago
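
The linked post drives the fine-tuning loop through Lightning Fabric, whose core is a few lines wrapping an existing PyTorch loop. A minimal sketch, assuming a standard model, optimizer, and dataloader already exist:

    import lightning as L

    fabric = L.Fabric(accelerator="cuda", devices=1, precision="bf16-mixed")
    fabric.launch()

    # Wrap the usual PyTorch objects; Fabric handles device placement and precision.
    model, optimizer = fabric.setup(model, optimizer)
    dataloader = fabric.setup_dataloaders(dataloader)

    for batch in dataloader:
        optimizer.zero_grad()
        loss = model(**batch).loss
        fabric.backward(loss)  # replaces loss.backward()
        optimizer.step()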
Compare the code-aligned model to the current SOTA · Closed · a year ago
Fix bug in save_pretrained · Closed · a year ago
Add a Dockerfile to jerboa to run on RunPod · Closed · a year ago
Add the RedPajama instruct dataset to our pipeline · Closed · a year ago
Pipeline training dataset refactoring · Closed · a year ago
Log the dataset in wandb · Closed · a year ago
Align LLaMA 7B on Code Alpaca · Closed · a year ago · 1 comment
Add QLoRA to our current codebase · Closed · a year ago · 1 comment
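
QLoRA combines a 4-bit quantized, frozen base model with small trainable LoRA adapters; in the transformers/peft stack that is roughly the following. A sketch with illustrative hyperparameters, not jerboa's actual configuration:

    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

    # 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA).
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    model = AutoModelForCausalLM.from_pretrained(
        "tiiuae/falcon-7b", quantization_config=bnb_config,
        device_map="auto", trust_remote_code=True,
    )
    model = prepare_model_for_kbit_training(model)

    # Trainable LoRA adapters on top of the quantized weights.
    lora_config = LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05,
        target_modules=["query_key_value"], task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()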
Align LLaMA 7B on Alpaca with 4 bits · Closed · a year ago
Long term: create a good evaluation for QA code · Closed · a year ago · 8 comments
Publish alpaca-lora 8 bits on our HF account · Closed · a year ago · 1 comment
Prepare our codebase to be able to fine-tune on Code Alpaca · Closed · a year ago
Fix evaluation OOM · Closed · a year ago
Paper: The False Promise of Imitating Proprietary LLMs · Closed · a year ago
WandB: Upload artifacts · Closed · a year ago · 1 comment
WandB: Remove unwanted loss chart · Closed · a year ago · 1 comment
Create a tiny LLaMA model to run tests · Closed · a year ago