primepake/wav2lip_288x288 Issues
Stargazers: 532 · Watchers: 18 · Issues: 149 · Forks: 136
- Structure and arrangement of the dataset before executing any script, with the flow of script execution (updated 2 days ago)
- Why can't training start? (updated 7 days ago, 3 comments)
- Syncnet loss does not converge (updated 21 days ago, 21 comments)
- train_syncnet_sam.py does not respond when loading (updated a month ago, 5 comments)
- Use HuBERT features to train SyncNet; the loss does not converge (updated a month ago, 2 comments)
- Inference not working (updated 2 months ago, 7 comments)
- How to use syncnet_python, and training steps (updated 2 months ago, 12 comments)
- python3 train_syncnet_sam.py (updated 2 months ago, 8 comments)
- DINet (updated 2 months ago, 1 comment)
- What is the difference between this and the ordinary Easy-Wav2Lip? (updated 2 months ago)
- dataset (updated 2 months ago)
- DINet implementation (updated 2 months ago, 1 comment)
- Could you share your syncnet checkpoint trained on AVSpeech? (updated 2 months ago, 3 comments)
- wav2lip sam (updated 2 months ago, 1 comment)
- Training failed: the character's lip shape does not change with the speech (updated 2 months ago, 6 comments)
- Also around 0.69 at 240,000 steps (updated 2 months ago, 2 comments)
- Looking for others who are training models, to share ideas with; welcome (updated 2 months ago, 3 comments)
- Model copying reference frames as it is (closed 5 months ago, 5 comments)
- Why my train loss after introducing sync loss? (updated 2 months ago, 4 comments)
- How to train (closed 3 months ago, 6 comments)
- The generated bottom half of the face is always blurry (updated 2 months ago, 2 comments)
- do inference (closed 2 months ago)
- Hi sir, I am a beginner; should I prepare a video of no less than 288, or a video of 384? (closed 4 months ago)
- Train syncnet with SyncNet_color_384 but train wav2lip with SyncNet_color? (updated 3 months ago, 1 comment)
- What indicator marks the end of hq_wav2lip_sam_train training? (updated 3 months ago, 4 comments)
- train_syncnet_sam.py is not using the GPU (RTX 4090) (updated 3 months ago, 1 comment)
- Loss stuck around 0.69 (updated 3 months ago, 7 comments)
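A recurring theme in the entries above is a loss stuck near 0.69. For context: 0.69 is approximately ln 2, the binary cross-entropy of a classifier that always outputs 0.5 regardless of its input, i.e. chance level, so a SyncNet loss plateauing there usually means the model has not yet learned any audio-video correspondence. A minimal, purely illustrative check (not code from this repo):

```python
import math

# Binary cross-entropy for one example with true label y and predicted probability p
def bce(y: float, p: float) -> float:
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# A classifier that always predicts 0.5 scores the same loss on either class:
chance = bce(1.0, 0.5)   # equals -ln(0.5) = ln(2)
print(round(chance, 4))  # 0.6931
```

Any training run whose loss sits at this value is doing no better than guessing, which is why several issues treat "stuck at 0.69" as a failure to converge rather than a plateau near an optimum.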
- When I use hq_wav2lip_sam_train.py (closed 4 months ago, 3 comments)
- Syncnet training? Chicken and egg? (updated 4 months ago, 4 comments)
- High-resolution dataset (updated 4 months ago, 1 comment)
- video clip length (updated 4 months ago)
- the input of LPIPS loss (closed 5 months ago, 1 comment)
- Hi Brother (updated 5 months ago, 2 comments)
- Wav2Lip_SAM freeze_audio_encoder (closed 5 months ago, 1 comment)
- Should this be `Wav2Lip_SAM` or `Wav2Lip_384`? (updated 5 months ago)
- Why do you keep changing the project name back and forth? (closed 5 months ago, 5 comments)
- A little question (closed 6 months ago, 14 comments)
- Missing license (closed 5 months ago, 3 comments)
- train_syncnet history, waiting for the result (closed 7 months ago, 3 comments)
- During inference, how to remove the additional mask layer on the face? (updated 6 months ago)
- Is 288 SAM supported? I changed syncnet's image size to 288 and trained it, but the second step does not support 288 (closed 6 months ago)
- Help to test the data from an open-sourced dataset (updated 6 months ago)
- instruction (updated 6 months ago)
- RuntimeError: Error(s) in loading state_dict for SyncNet_color (closed 6 months ago, 1 comment)
- When running preprocess.py, the process is unresponsive (closed 7 months ago)
- Inference not working (updated 7 months ago)
- Is there a pre-trained model? (updated 7 months ago)
- Can preprocess.py not be used? (updated 7 months ago, 4 comments)
- Why use ReLU instead of LeakyReLU in sam.py? (closed 7 months ago, 2 comments)
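For readers following the ReLU vs. LeakyReLU question above: the two activations differ only on negative inputs, where ReLU outputs exactly zero (and passes no gradient) while LeakyReLU passes a small fraction through. The standard definitions, sketched in plain Python purely for illustration (this is not the repo's sam.py code):

```python
def relu(x: float) -> float:
    # Zeroes out negative inputs entirely; gradient is 0 for x < 0
    return max(0.0, x)

def leaky_relu(x: float, alpha: float = 0.01) -> float:
    # Lets a small fraction of negative inputs through, avoiding "dead" units
    return x if x > 0 else alpha * x

print(relu(-2.0), leaky_relu(-2.0))  # 0.0 -0.02
```

The practical trade-off is that ReLU units can die (output zero forever once their pre-activations go negative), which LeakyReLU avoids at the cost of a small negative-side slope.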
- Error: too many values to unpack (updated 7 months ago, 1 comment)
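The "too many values to unpack" report above is Python's standard ValueError, raised when an unpacking assignment has fewer targets on the left than the iterable on the right provides. A generic illustration of the cause and the usual fixes, not tied to any specific line in this repo:

```python
values = (1, 2, 3)

try:
    a, b = values  # left side expects 2 items, right side yields 3
except ValueError as e:
    print(e)  # too many values to unpack (expected 2)

# Common fixes: match the arity, or capture the remainder with a star target
a, b, c = values
first, *rest = values
print(first, rest)  # 1 [2, 3]
```

In practice this error often appears when a function's return signature changed (e.g. it now returns three values instead of two) while call sites still unpack the old number.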