naplab / DANet

Deep Attractor Network (DANet) for single-channel speech separation

The trained model is not working well.

pengweixiang opened this issue · comments

Thanks very much for your code! It really helps me a lot. I generated the dataset according to the requirements in your paper and ran it through the training code you provided, but it did not achieve the expected results. The mixed audio cannot be separated even a little; the separated audio files differ from the original mixture only in volume.

Hi! I want to know how to get the dataset, since I could not find it. I then tried to import a dataset I made myself, but it failed with this error:

OSError: Unable to open file (unable to open file: name = 'C:\Users\YJ.anaconda\trian', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

I don't know how to solve this problem. Thank you for your time! Your prompt reply will be highly appreciated!
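
In case it helps: errno 2 usually just means the path h5py received does not exist (or backslashes in a Windows path were swallowed as escape sequences). Below is a minimal sketch of how one might check this, assuming the training data is an HDF5 file opened with h5py; the path and file extension are hypothetical, so substitute your own.

```python
# Minimal sketch: verify the HDF5 path before opening it with h5py.
import os
import h5py

# Use a raw string (r'...') or forward slashes so backslashes in a Windows
# path are not interpreted as escapes such as '\t' or '\U'.
data_path = r'C:\Users\YJ\.anaconda\train.hdf5'  # hypothetical path and extension

# errno 2 ('No such file or directory') means h5py never found the file,
# so confirm the path exists first.
if not os.path.isfile(data_path):
    raise FileNotFoundError(f'HDF5 file not found: {data_path}')

with h5py.File(data_path, 'r') as f:
    print(list(f.keys()))  # inspect the datasets stored in the file
```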

Hello! May I ask how to convert the TIMIT dataset to HDF5? This question has been bothering me.

What should I do to solve it? Thank you for your reply.
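
Not the author, but here is a minimal sketch of how one might pack TIMIT wav files into an HDF5 file with h5py. The layout (one dataset per utterance, keyed by its relative path) and the use of soundfile are assumptions; check what structure this repo's data loader actually expects and adjust the keys accordingly. Also note that some TIMIT releases ship NIST SPHERE files under a .WAV extension; if soundfile cannot read them, convert them to standard WAV first (e.g., with sph2pipe).

```python
# Minimal sketch: store each TIMIT utterance as its own HDF5 dataset.
import os
import glob
import h5py
import soundfile as sf  # pip install soundfile

timit_root = 'TIMIT/TRAIN'     # hypothetical location of the TIMIT corpus
out_path = 'timit_train.hdf5'  # hypothetical output file

with h5py.File(out_path, 'w') as f:
    # Original TIMIT uses uppercase '.WAV'; adjust the pattern if yours differs.
    for wav_path in glob.glob(os.path.join(timit_root, '**', '*.WAV'), recursive=True):
        audio, sr = sf.read(wav_path)  # 1-D float array plus sample rate
        key = os.path.relpath(wav_path, timit_root).replace(os.sep, '/')
        dset = f.create_dataset(key, data=audio)
        dset.attrs['sample_rate'] = sr  # keep the rate alongside the audio
```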

Hello, it's me again. I only have the TIMIT dataset, so how can I run this code with it? Your prompt reply will be highly appreciated!

You can generate data for the target domain following the paper provided by the author, and make sure you use the matching PyTorch version; then the code should run smoothly. Because of the limitations of this project I temporarily gave up on following it further, but the idea behind the project is good.
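
For anyone stuck at the data-generation step, here is a minimal sketch of how one might create a two-speaker mixture from two TIMIT utterances, roughly in the spirit of the mixing recipe in the paper: scale one speaker to a random SNR relative to the other and sum them. The SNR range, the absence of resampling, and the output format are assumptions; follow the exact settings in the paper and this repo.

```python
# Minimal sketch: mix two utterances from different speakers at a given SNR.
import numpy as np
import soundfile as sf

def mix_pair(wav_a, wav_b, snr_db, out_path):
    """Mix two utterances so that speaker A is snr_db louder than speaker B."""
    s1, sr1 = sf.read(wav_a)
    s2, sr2 = sf.read(wav_b)
    assert sr1 == sr2, 'resample first if the sample rates differ'

    # Truncate both signals to the shorter length.
    n = min(len(s1), len(s2))
    s1, s2 = s1[:n], s2[:n]

    # Scale speaker B so the pair sits at the requested SNR.
    p1 = np.mean(s1 ** 2) + 1e-8
    p2 = np.mean(s2 ** 2) + 1e-8
    s2 = s2 * np.sqrt(p1 / (p2 * 10 ** (snr_db / 10)))

    # Sum, then rescale sources and mixture by the same gain to avoid
    # clipping while keeping mixture == sum of sources.
    mix = s1 + s2
    gain = 0.9 / (np.max(np.abs(mix)) + 1e-8)
    sf.write(out_path, mix * gain, sr1)
    return s1 * gain, s2 * gain, mix * gain

# Hypothetical usage: two utterances from different speakers, SNR drawn
# uniformly from an assumed 0-5 dB range.
mix_pair('spk1_utt.wav', 'spk2_utt.wav', np.random.uniform(0.0, 5.0), 'mix.wav')
```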

@pengweixiang Hi, were you able to figure out why it did not work?