Incredible! How to run inference on a custom file?
youssefavx opened this issue · comments
Super impressed by your results! Curious to know how I could run a sample audio file through your model to upsample it. It seems the code provided here simply evaluates the model: https://github.com/haoheliu/ssr_eval/tree/main/examples/NVSR
I'll try to figure it out from that but would love any help whatsoever. No pressure whatsoever if busy though!
Also does this require fine-tuning on a custom voice for good results?
Okay I think I figured it out, please let me know if I'm using it incorrectly though:
import librosa
import soundfile as sf
import torch

# NVSRPostProcTestee comes from the NVSR example code linked above.
device = "cuda" if torch.cuda.is_available() else "cpu"
testee = NVSRPostProcTestee(device)

# librosa resamples the input to the model's 44.1 kHz target rate.
x, _ = librosa.load("Sample.wav", sr=44100)
result = testee.infer(x)
sf.write("result.wav", result, 44100)
Yes, that's the way it works :)