Stargazers: 436 · Watchers: 9 · Issues: 15 · Forks: 36
Vaibhavs10/fast-whisper-finetuning Issues
LoRA config (updated 2 months ago)
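For context, a minimal sketch of the kind of LoRA configuration this repo applies to Whisper, assuming the PEFT library; the rank, alpha, dropout, and target modules below are illustrative values, not a prescription from the issue itself:

```python
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")

# Illustrative hyperparameters; tune r / lora_alpha / dropout for your setup.
lora_config = LoraConfig(
    r=32,                                 # LoRA rank
    lora_alpha=64,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    bias="none",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights train
```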
RuntimeError: expected mat1 and mat2 to have the same dtype, but got: c10::Half != float (updated 4 months ago)
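This is a generic PyTorch matmul dtype mismatch: one operand is float16 (typical for an 8-bit or fp16 model) and the other float32. A minimal reproduction and the usual fix, casting one operand so both sides match:

```python
import torch

a = torch.randn(2, 4, dtype=torch.float16)  # e.g. activations from an fp16 model
b = torch.randn(4, 3, dtype=torch.float32)  # e.g. a weight left in full precision

# a @ b  # RuntimeError: expected mat1 and mat2 to have the same dtype
out = a.to(b.dtype) @ b  # cast one operand so both sides agree

# In the fine-tuning setup the same idea applies: cast the batch's
# input_features to model.dtype, or wrap the forward pass in torch.autocast.
```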
ValueError: A custom logits processor of type <class 'transformers.generation.logits_process.ForceTokensLogitsProcessor'>.... (updated 5 months ago)
Increase speed of data loading (closed 6 months ago)
Merge and Unload Peft weights to base model (closed a year ago, 3 comments)
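A minimal sketch of the standard PEFT recipe for this, assuming the adapter was saved with `save_pretrained`; the adapter and output paths are placeholders. Merging is done against a full-precision base model, not the 8-bit one used during training:

```python
from transformers import WhisperForConditionalGeneration
from peft import PeftModel

# Load the base model in full precision, attach the trained adapter,
# then fold the LoRA weights into the base weights and drop the wrappers.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
model = PeftModel.from_pretrained(base, "path/to/adapter")
merged = model.merge_and_unload()
merged.save_pretrained("path/to/merged-model")
```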
How to invoke compute_metrics? (updated 7 months ago)
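For reference, the standard Seq2SeqTrainer pattern: pass a `compute_metrics` callable and the trainer invokes it during evaluation when `predict_with_generate=True`. A sketch assuming a `WhisperTokenizer` named `tokenizer` and the `evaluate` library:

```python
import evaluate

wer_metric = evaluate.load("wer")

def compute_metrics(pred):
    pred_ids = pred.predictions
    label_ids = pred.label_ids
    # -100 is the ignore index used to pad the labels; restore the pad
    # token so the sequences can be decoded back to text.
    label_ids[label_ids == -100] = tokenizer.pad_token_id
    pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
    label_str = tokenizer.batch_decode(label_ids, skip_special_tokens=True)
    return {"wer": 100 * wer_metric.compute(predictions=pred_str, references=label_str)}

# Then: Seq2SeqTrainer(..., compute_metrics=compute_metrics) together with
# Seq2SeqTrainingArguments(predict_with_generate=True, ...).
```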
Bitsandbytes error (CUDA setup error) on Google Colab (updated 8 months ago, 2 comments)
Word Error Rate increasing after training on whisper-large-v3 (updated 8 months ago)
Applying SpecAugment while fine-tuning (updated 9 months ago)
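Recent transformers versions expose SpecAugment for Whisper through config fields on `WhisperConfig`; a hedged sketch (the field names exist, but the probabilities below are illustrative):

```python
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")

# SpecAugment is applied to the input features during training only.
model.config.apply_spec_augment = True
model.config.mask_time_prob = 0.05     # probability of masking a time step
model.config.mask_feature_prob = 0.05  # probability of masking a mel channel
```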
TypeError: prepare_model_for_kbit_training() got an unexpected keyword argument 'output_embedding_layer_name' (closed 9 months ago, 4 comments)
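The `output_embedding_layer_name` argument existed in early PEFT releases and was later removed; with newer PEFT the call takes the model alone. A sketch assuming the older `load_in_8bit=True` loading path used in this repo's notebook:

```python
from transformers import WhisperForConditionalGeneration
from peft import prepare_model_for_kbit_training

model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2", load_in_8bit=True, device_map="auto"
)
# Newer PEFT: no `output_embedding_layer_name` keyword. The function freezes
# the base weights and upcasts norm layers to fp32 for training stability.
model = prepare_model_for_kbit_training(model)
```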
Upcoming release - To-Do (updated a year ago)
After fine-tuning on 200h of Portuguese data, recognition has become much worse? (updated a year ago)
AttributeError: 'NoneType' object has no attribute 'cget_col_row_stats' (closed a year ago, 2 comments)
Max_new_token (closed a year ago, 1 comment)
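Presumably about capping generation length: `max_new_tokens` is the standard `generate()` argument, with the caveat that Whisper's decoder supports at most 448 target positions, so the prompt plus new tokens must stay under that limit.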
TypeError: prepare_model_for_int8_training() got an unexpected keyword argument 'output_embedding_layer_name' (closed a year ago, 4 comments)
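This is the older-API twin of the `prepare_model_for_kbit_training` issue above: `prepare_model_for_int8_training` was the function's earlier name in PEFT, and the same fix applies, namely dropping the removed `output_embedding_layer_name` keyword (see the sketch under that issue).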