Different multi-speaker dataset
narcise opened this issue
Hi, I'm trying to use your code with a different multi-speaker dataset. I want global conditioning on speaker IDs and local conditioning on another signal that looks like a speech signal. Which approach do you recommend: changing the prototype functions (nnmnkwii) or changing the dataset-loading function inside wavenet_vocoder (like the cmu_arctic one)?
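For context, here is a minimal sketch of what such a dataset could yield per item: a waveform, time-aligned local-conditioning features, and a global speaker ID. The class name, the `hop_size` value, and the feature shapes are assumptions for illustration, not wavenet_vocoder's actual API.

```python
import numpy as np

class MultiSpeakerDataset:
    """Hypothetical dataset yielding (waveform, local_features, speaker_id)."""

    def __init__(self, items, hop_size=256):
        # items: list of (waveform, local_features, speaker_id) tuples,
        # where local_features has one frame per hop_size audio samples.
        self.items = items
        self.hop_size = hop_size

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        wav, c, speaker_id = self.items[idx]
        # Trim the waveform so it aligns exactly with the local-conditioning
        # frames: n_frames * hop_size samples.
        n_frames = len(c)
        wav = wav[: n_frames * self.hop_size]
        return wav, c, speaker_id

# Toy usage: two speakers with random "audio" and 80-dim "features".
rng = np.random.default_rng(0)
items = [
    (rng.standard_normal(300 * 256 + 17), rng.standard_normal((300, 80)), 0),
    (rng.standard_normal(200 * 256 + 5), rng.standard_normal((200, 80)), 1),
]
dataset = MultiSpeakerDataset(items)
wav, c, sid = dataset[0]
print(len(wav), c.shape, sid)  # → 76800 (300, 80) 0
```

The speaker ID (global conditioning) is constant per utterance, while the local features vary frame by frame; the model would typically embed the ID and broadcast it over time alongside the upsampled local features.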
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.