Madhu009 / Deep-math-machine-learning.ai

A blog about machine learning and deep learning algorithms and the math behind them, with machine learning algorithms written from scratch.

Home Page: https://medium.com/deep-math-machine-learning-ai


Issue in understanding the logic in "Deep-math-machine-learning.ai/NLP/Word2Vec"

GagandeepDulku opened this issue

Hi,
I have a doubt about one part (In 20). After training the model, we read out the trained embeddings with this line of code:

trained_embeddings = embeddings.eval()

Why do we expect that randomly initialized array to get updated when we never use it as an input? Please point out where I am wrong and correct me. As far as I know, the following code

loss = tf.reduce_mean(tf.nn.nce_loss(nce_weights, nce_biases, Y, embed, num_sampled, voc_size))
# Use the Adam optimizer
optimizer = tf.train.AdamOptimizer(1e-1).minimize(loss)

will update nce_weights on each iteration, not the embedding matrix. Why would we think embeddings gets updated when we are not even using it as an input anywhere?

Thanks
Gagan

commented

loss = tf.reduce_mean(tf.nn.nce_loss(nce_weights, nce_biases, Y, embed, num_sampled, voc_size))

In this line we are also feeding the "embed" tensor into the loss, and that is what updates the "embeddings" variable during training.
Observe these two lines:

embeddings = tf.Variable(tf.random_uniform([voc_size, embedding_size], -1.0, 1.0))
embed = tf.nn.embedding_lookup(embeddings, X)

embed is just a row lookup into embeddings, so the loss depends on embeddings through embed. minimize(loss) computes gradients with respect to every trainable variable on the path to the loss, which includes embeddings, not only nce_weights and nce_biases. The optimizer therefore updates the looked-up rows of embeddings on each training step.
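
Here is a minimal, self-contained sketch you can run to verify this (TensorFlow 1.x API, as in the notebook; the sizes and the feed values are illustrative choices of mine, not the notebook's actual data):

import numpy as np
import tensorflow as tf  # TensorFlow 1.x API, as used in the notebook

voc_size = 8         # illustrative vocabulary size
embedding_size = 2   # illustrative embedding dimension
num_sampled = 1      # negative samples for the NCE loss

X = tf.placeholder(tf.int32, shape=[None])     # center word ids
Y = tf.placeholder(tf.int32, shape=[None, 1])  # context word ids (labels)

# The randomly initialized matrix the question is about.
embeddings = tf.Variable(tf.random_uniform([voc_size, embedding_size], -1.0, 1.0))
# Row lookup: this puts `embeddings` on the path from the inputs to the loss.
embed = tf.nn.embedding_lookup(embeddings, X)

nce_weights = tf.Variable(tf.random_uniform([voc_size, embedding_size], -1.0, 1.0))
nce_biases = tf.Variable(tf.zeros([voc_size]))

loss = tf.reduce_mean(tf.nn.nce_loss(nce_weights, nce_biases, Y, embed, num_sampled, voc_size))
optimizer = tf.train.AdamOptimizer(1e-1).minimize(loss)

# Sanity check: the loss has a gradient w.r.t. `embeddings`,
# so minimize(loss) will update it too.
assert tf.gradients(loss, [embeddings])[0] is not None

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    before = sess.run(embeddings)
    sess.run(optimizer, feed_dict={X: [0, 1], Y: [[1], [2]]})
    after = sess.run(embeddings)
    # Per-row change: only the looked-up rows (0 and 1) move on this step.
    print(np.abs(after - before).sum(axis=1))

On this first step the rows of embeddings that X did not index stay at their initial values, because no gradient reaches them through the lookup; only the rows selected by embedding_lookup are pulled by the optimizer. That is exactly why trained_embeddings = embeddings.eval() returns a matrix that differs from its random initialization after training.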