mpc001 / Lipreading_using_Temporal_Convolutional_Networks

ICASSP'22 Training Strategies for Improved Lip-Reading; ICASSP'21 Towards Practical Lipreading with Distilled and Efficient Models; ICASSP'20 Lipreading using Temporal Convolutional Networks

Reproducing on LRW1000: I only get 38.6, how can I get 41.4?

shibefore opened this issue · comments

Hi, I have reproduced the results on LRW1000.
When I train with batchsize=32, lr=1e-3, and the same optimizer:
the BiGRU model gets 38.4
the multi-scale TCN model gets 38.6

How can I reach 41.4? Could you share your training parameters?

When reproducing the other paper, "LEARN AN EFFECTIVE LIP READING MODEL WITHOUT PAINS":
the BiGRU model gets 57.68 (the paper reports 55.7)
the multi-scale TCN model gets 55.49

Hi,

The paper you mentioned made an improvement on LRW-1000. In that paper, they used 40-frame sequences for training and testing, and the target word is always located in the centre of each sequence.

In contrast, we segmented sequences using the provided annotations, without any padding. The average duration of a sequence is 0.3 seconds (about 8 frames), which is far shorter than 40 frames. Comparing the two results suggests that the contextual information around the target word may be helpful.
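For illustration, a minimal sketch of that segmentation step, assuming 25 fps video and hypothetical `start_sec`/`end_sec` fields taken from the annotation file (this is not the repository's actual data loader):

```python
# Minimal sketch: cut a word clip out of a longer video using annotation
# times, with no padding or centring around the target word.
# Assumes 25 fps and hypothetical start_sec/end_sec annotation fields.
FPS = 25

def segment_word(frames, start_sec, end_sec):
    """Return only the frames covering [start_sec, end_sec]."""
    start_frame = int(round(start_sec * FPS))
    end_frame = int(round(end_sec * FPS))
    # The clip length varies per word (about 8 frames / 0.3 s on average).
    return frames[start_frame:end_frame + 1]
```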

Regarding the pre-processing, we followed "LRW-1000: A Naturally-Distributed Large-Scale Benchmark for Lip Reading in the Wild" and resized the cropped mouth ROIs to 122x122.
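A hedged example of that resize step using OpenCV (the exact cropping and resizing code in the LRW-1000 pipeline may differ):

```python
import cv2

def resize_mouth_roi(roi):
    """Resize a cropped mouth ROI to 122x122, as in the LRW-1000 baseline."""
    return cv2.resize(roi, (122, 122), interpolation=cv2.INTER_LINEAR)
```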

For training, we used batchsize=16 and learning rate (lr) = 1.5e-4, not batchsize=32 and lr=1e-3. We used Adam (weight_decay=1e-4) to train the model for 80 epochs, and the learning rate was decayed by a cosine scheduler without a warm-up stage.
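In PyTorch terms, that setup roughly corresponds to the sketch below (optimizer and scheduler only; the dummy model and loop stand in for the actual lipreading network and training code in the repository):

```python
import torch
import torch.nn as nn

# Sketch of the described configuration: Adam with weight_decay=1e-4,
# initial lr=1.5e-4, 80 epochs, cosine decay, no warm-up.
# `model` is a placeholder for the lipreading network.
model = nn.Linear(512, 1000)

optimizer = torch.optim.Adam(model.parameters(), lr=1.5e-4, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=80)

for epoch in range(80):
    # ... run one training epoch with batch_size=16 here ...
    scheduler.step()  # cosine decay applied once per epoch
```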

Could you please release the pretrained weights trained on the LRW-1000 dataset?