Vaibhavs10/fast-whisper-finetuning
Stargazers: 415 · Watchers: 9 · Issues: 15 · Forks: 33
Vaibhavs10/fast-whisper-finetuning Issues
RuntimeError: expected mat1 and mat2 to have the same dtype, but got: c10::Half != float (Updated 2 months ago)
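For context, this error typically appears when a half-precision (fp16) tensor is multiplied against a float32 weight, as can happen when parts of a quantized model are kept in float32. A minimal sketch reproducing the mismatch and the usual fix (casting one operand); the tensors here are illustrative and not taken from the repo:

```python
import torch

# Hypothetical tensors: mat1 in half precision, mat2 in float32,
# mirroring the c10::Half != float mismatch in the issue title.
mat1 = torch.randn(2, 4, dtype=torch.float16)
mat2 = torch.randn(4, 3, dtype=torch.float32)

try:
    _ = mat1 @ mat2  # dtypes differ, so matmul raises RuntimeError
except RuntimeError as e:
    print(f"matmul failed: {e}")

# The usual fix: cast one operand so both sides share a dtype.
out = mat1.to(mat2.dtype) @ mat2
print(out.dtype)  # torch.float32
```

In a model, the equivalent fix is to cast the offending module (or the inputs) to a single dtype before the forward pass.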
ValueError: A custom logits processor of type <class 'transformers.generation.logits_process.ForceTokensLogitsProcessor'>.... (Updated 3 months ago)
Increase speed of data loading (Closed 4 months ago)
Merge and Unload Peft weights to base model (Closed 10 months ago, 3 comments)
How to invoke compute_metrics? (Updated 5 months ago)
Bitsandbytes error (CUDA setup error) on Google Colab (Updated 6 months ago, 2 comments)
Word Error Rate increasing post training on whisper-large-v3 (Updated 6 months ago)
Applying SpecAugment while fine-tuning (Updated 7 months ago)
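For reference, recent transformers versions expose SpecAugment for Whisper through config flags; the masking is then applied inside the encoder's forward pass in training mode. A minimal sketch with illustrative (assumed, not repo-provided) masking probabilities, constructing a config from scratch rather than loading a pretrained one:

```python
from transformers import WhisperConfig

# Illustrative values; tune the mask probabilities for your data.
config = WhisperConfig(
    apply_spec_augment=True,   # enable SpecAugment during training
    mask_time_prob=0.05,       # fraction of time steps to mask
    mask_feature_prob=0.05,    # fraction of feature channels to mask
)
print(config.apply_spec_augment)  # True
```

A model built from (or loaded with) this config applies the masks only when `model.training` is set, so evaluation is unaffected.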
TypeError: prepare_model_for_kbit_training() got an unexpected keyword argument 'output_embedding_layer_name' (Closed 7 months ago, 4 comments)
Upcoming release - To-Do (Updated 9 months ago)
Recognition has become much worse after fine-tuning on 200h of Portuguese data? (Updated a year ago)
AttributeError: 'NoneType' object has no attribute 'cget_col_row_stats' (Closed a year ago, 2 comments)
Max_new_token (Closed a year ago, 1 comment)
TypeError: prepare_model_for_int8_training() got an unexpected keyword argument 'output_embedding_layer_name' (Closed a year ago, 4 comments)