NeuralTextualInversion / NeTI

Official Implementation for "A Neural Space-Time Representation for Text-to-Image Personalization" (SIGGRAPH Asia 2023)

Home Page: https://neuraltextualinversion.github.io/NeTI/

optimizer

yja1 opened this issue · comments

commented

In `coach.py`:

```python
optimizer = torch.optim.AdamW(
    self.text_encoder.text_model.embeddings.mapper.parameters(),  # only optimize the embeddings
    lr=self.cfg.optim.learning_rate,
    betas=(self.cfg.optim.adam_beta1, self.cfg.optim.adam_beta2),
    weight_decay=self.cfg.optim.adam_weight_decay,
    eps=self.cfg.optim.adam_epsilon,
)
```
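For context, the paper replaces the static token embedding with a small neural mapper conditioned on the denoising timestep and the U-Net layer, so the mapper's weights are what encode the new concept. Below is a minimal sketch of that idea; the class name, dimensions, and the plain `(t, l)` conditioning are illustrative assumptions, not the repo's actual mapper (which, per the paper, uses a positional encoding of `(t, l)` and additional machinery):

```python
import torch
import torch.nn as nn

class MapperSketch(nn.Module):
    """Illustrative stand-in for the mapper optimized above: an MLP that
    predicts the concept's embedding from the denoising timestep t and the
    U-Net layer index l, instead of storing one static row in
    token_embedding.weight."""

    def __init__(self, hidden_dim: int = 128, token_dim: int = 768):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden_dim),   # input: the pair (t, l)
            nn.ReLU(),
            nn.Linear(hidden_dim, token_dim),
        )

    def forward(self, t: torch.Tensor, l: torch.Tensor) -> torch.Tensor:
        # t, l: shape (batch,) -> one predicted embedding per (t, l) pair
        cond = torch.stack([t.float(), l.float()], dim=-1)
        return self.net(cond)
```

Since the concept lives entirely in these mapper weights, passing only `mapper.parameters()` to AdamW is enough to train it.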
Why does the optimizer only include `self.text_encoder.text_model.embeddings.mapper.parameters()`? Why isn't `self.text_encoder.text_model.embeddings.token_embedding.weight` passed to the optimizer as well? The embedding of the newly added token needs to be optimized too.
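What the question describes is closer to classic Textual Inversion, where the raw embedding table itself is trained and gradients are masked so that only the new token's row updates. A minimal sketch of that alternative follows; `text_encoder`, `placeholder_token_id`, and the learning rate are assumptions for illustration, not this repo's code:

```python
import torch

# Classic Textual Inversion alternative (sketch): optimize the embedding
# table directly, then zero the gradients of every row except the new
# token's, so the pretrained vocabulary is effectively frozen.
# `text_encoder` is assumed to be a transformers CLIPTextModel.
token_embedding = text_encoder.text_model.embeddings.token_embedding
optimizer = torch.optim.AdamW([token_embedding.weight], lr=5e-3)

def mask_grads(placeholder_token_id: int) -> None:
    # Keep only the new token's gradient row; zero all others.
    grad = token_embedding.weight.grad
    if grad is not None:
        keep = torch.zeros_like(grad)
        keep[placeholder_token_id] = 1.0
        grad.mul_(keep)

# In the training loop: call mask_grads(placeholder_token_id)
# after loss.backward() and before optimizer.step().
```

In NeTI's design, by contrast, the placeholder token's embedding is produced by the mapper at each forward pass rather than read from this table, which is presumably why only the mapper's parameters go into the optimizer.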