TMElyralab / MuseTalk

MuseTalk: Real-Time High Quality Lip Synchronization with Latent Space Inpainting


The weight of L1

chunyu-li opened this issue · comments

The loss function is $L = \lambda L_1 + L_2$. Could you please tell me the value of $\lambda$?

Hi,

This weight balances the two losses during training, so you can adjust it according to the actual loss values. The released model uses a value of 2.


Thank you very much for your answer!


In the figure on the GitHub project page, $\lambda$ multiplies the latent loss term, but in the train_codes code it is the other way around. Is one of them a mistake?

# Mask the top half of the image and calculate the loss only for the lower half.
image_pred_img = image_pred_img[:, :, image_pred_img.shape[2]//2:, :]
image = image[:, :, image.shape[2]//2:, :]
loss_lip = F.l1_loss(image_pred_img.float(), image.float(), reduction="mean")  # loss on the decoded images
loss_latents = F.l1_loss(image_pred.float(), latents.float(), reduction="mean")  # loss on the latents
loss = 2.0 * loss_lip + loss_latents  # weight to balance the two losses


Indeed, I also think the author made a mistake here.


I double-checked the code: the figure is wrong and the code is correct... the pixel-level loss weight is 2.
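Reconciling the thread: assuming the released training code (rather than the project-page figure) is authoritative, the loss as actually trained would be

$L = \lambda L_{lip} + L_{latents}, \quad \lambda = 2$

where $L_{lip}$ is the pixel-space L1 loss on the lower half of the decoded frame and $L_{latents}$ is the L1 loss in latent space, i.e. the weight $\lambda$ sits on the pixel term, the opposite of what the figure shows.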