descriptinc / cargan

Official repository for the paper "Chunked Autoregressive GAN for Conditional Waveform Synthesis"

Home Page: https://maxrmorrison.com/sites/cargan

inference speed

forwiat opened this issue · comments

Hello @maxrmorrison, thanks for your work on cargan.
In my experiments, inference is slow even though I set AUTOREGRESSIVE=False. How can I speed up cargan to approach the speed reported in the paper?

Inference speed depends on the compute you are using. If you want results comparable to those given in the paper, make sure your hardware matches the compute specifications given in the paper. Also, the benchmark in the paper measures only the forward pass of the network; it does not include time spent loading, preprocessing, or saving results. See https://github.com/descriptinc/cargan/blob/master/cargan/evaluate/subjective/__main__.py.
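To illustrate the distinction above, here is a minimal sketch of benchmarking only the forward pass, with loading and preprocessing excluded from the timed region. `load_audio`, `preprocess`, and `generator` are hypothetical stand-ins, not the real cargan API:

```python
import time

def load_audio():          # stand-in for file I/O (not benchmarked)
    return [0.0] * 16000

def preprocess(audio):     # stand-in for feature extraction (not benchmarked)
    return audio

def generator(features):   # stand-in for the vocoder forward pass
    return [x * 2.0 for x in features]

def benchmark(n_runs=10):
    """Average wall-clock time of the forward pass only."""
    audio = load_audio()          # excluded from timing
    features = preprocess(audio)  # excluded from timing

    # With a GPU model, call torch.cuda.synchronize() before and after
    # the timed region so queued CUDA kernels are actually counted.
    start = time.perf_counter()
    for _ in range(n_runs):
        generator(features)
    elapsed = time.perf_counter() - start
    return elapsed / n_runs

print(f"average forward pass: {benchmark():.6f} s")
```

Timing the whole script (including I/O) will always look much slower than the paper's numbers, regardless of hardware.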

The paper says "we can easily improve generation speed at the cost of reduced training speed, increased memory usage, and slightly increased pitch error by changing the chunk size". What settings make the model as fast as HiFi-GAN V2 while producing comparable audio quality?
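The trade-off the paper describes can be sketched as follows: in autoregressive mode, each chunk requires a sequential generator call, so a larger chunk size means fewer sequential passes per second of audio. The constant names below are illustrative assumptions, not cargan's actual identifiers; check cargan/constants.py for the real configuration:

```python
# Hypothetical sketch of the chunk-size speed trade-off.
SAMPLE_RATE = 22050  # assumed sample rate for illustration

def n_forward_passes(seconds, chunk_size, autoregressive=True):
    """Sequential generator calls needed to synthesize `seconds` of audio."""
    total_samples = int(seconds * SAMPLE_RATE)
    if not autoregressive:
        return 1  # non-autoregressive: one pass for the whole utterance
    return -(-total_samples // chunk_size)  # ceiling division

# Larger chunks cut the sequential work (and the per-chunk conditioning
# overhead), at the cost of memory and slightly higher pitch error.
print(n_forward_passes(1.0, 2048))   # → 11
print(n_forward_passes(1.0, 8192))   # → 3
```

This is only the scaling argument, not a performance guarantee: whether a given chunk size matches HiFi-GAN V2's speed still depends on the hardware and the measurement methodology discussed above.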