No computation advantage in parallelizing DeepSpeech with torch.multiprocessing?
awsomecod commented
The function inference(Input) uses DeepSpeech to transcribe audio files. I need to run inference() for 10 different inputs, so I wrote the following code to perform those 10 runs in parallel. The code works, but I don't gain any advantage in computation time. Why?
import torch
import torch.multiprocessing

processes = []
for i in range(10):
    p = torch.multiprocessing.Process(target=inference, args=(Input[i],))
    p.start()
    processes.append(p)  # keep a handle so the join loop below actually waits
for p in processes:
    p.join()
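One way to sanity-check whether the ten runs actually overlap is to time the parallel launch against a plain sequential loop. This is only a minimal measurement sketch, assuming inference() and Input are as defined in this post:

import time
import torch.multiprocessing

# Time the parallel variant.
start = time.perf_counter()
procs = [torch.multiprocessing.Process(target=inference, args=(Input[i],))
         for i in range(10)]
for p in procs:
    p.start()
for p in procs:
    p.join()
print(f"parallel:   {time.perf_counter() - start:.1f}s")

# Time the sequential baseline.
start = time.perf_counter()
for i in range(10):
    inference(Input[i])
print(f"sequential: {time.perf_counter() - start:.1f}s")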
A simplified version of the inference() function is as follows:
from deepspeech import Model

def inference(audio):
    ds = Model('./deepspeech-0.9.3-models.pbmm')  # loads the model on every call
    speech = ds.stt(audio)
    return speech
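Note that, as written, inference() reloads the .pbmm model on every call, so each of the ten processes pays the full model-load cost to transcribe a single file, and with one physical GPU all processes still share the same device. Below is a minimal sketch of loading the model once per worker instead, using a multiprocessing.Pool initializer; the worker count and helper names are illustrative assumptions, not part of the original code:

from multiprocessing import Pool
from deepspeech import Model

_ds = None  # one Model instance per worker process

def _init_worker(model_path):
    # Runs once in each worker: load the model a single time
    # instead of reloading the .pbmm for every input file.
    global _ds
    _ds = Model(model_path)

def _transcribe(audio):
    return _ds.stt(audio)

if __name__ == '__main__':
    # Input is assumed to be the same list of 10 audio buffers as above;
    # two workers is an arbitrary choice for a single shared GPU.
    with Pool(processes=2, initializer=_init_worker,
              initargs=('./deepspeech-0.9.3-models.pbmm',)) as pool:
        transcripts = pool.map(_transcribe, Input)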
I use an NVIDIA GPU and run DeepSpeech on the GPU.
Ubuntu 20.04
Python 3.8.10
GPU: NVIDIA
CUDA 10.1
lissyx commented
this is not a bug