Additional config details for the Hugging Face checkpoints
darius522 opened this issue
Hi there,
Thanks a bunch for your effort on that. This is fantastic work.
I was wondering if you could provide a bit more detail about the configuration you used to train the checkpoints you've provided on Hugging Face? They sound great, and I'd like to re-train them for my own purposes. From the file names, I can infer batch_size=12, tensor_cut=100000, and lr=0.0001. Is this right? What about warmup_epoch, for example? Additionally, did you use only a subset of LibriTTS or the full 960 hours?
Thanks again!
First, I used the full 960 hours of LibriTTS to train the codec model.
Second, batch_size=12, tensor_cut=100000, and lr=0.0001.
For more detailed config information, I will check the node (I hope I haven't deleted them) @darius522
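To summarize the settings confirmed in this thread, here is a minimal sketch in Python. Note that the key names and the comment on tensor_cut are assumptions for illustration, not the repository's actual config schema, and warmup_epoch remains unconfirmed:

```python
# Sketch of the training settings confirmed by the maintainer above.
# Key names are illustrative and may differ from the repo's real config keys.
confirmed_config = {
    "dataset": "LibriTTS (full 960 hours)",  # confirmed above
    "batch_size": 12,                        # confirmed above
    "tensor_cut": 100000,                    # confirmed above; presumably the audio segment length
    "lr": 1e-4,                              # confirmed above
    # "warmup_epoch": ?                      # not confirmed in this thread
}

for key, value in confirmed_config.items():
    print(f"{key}: {value}")
```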