
# word_vectors

Project done by Anshul and Uday.

## Project Description

Using Word2Vec to explore semantic similarities.

Word2vec is a group of related models that are used to produce word embeddings. These models are shallow, two-layer neural networks that are trained to reconstruct linguistic contexts of words. Word2vec takes as its input a large corpus of text and produces a vector space, typically of several hundred dimensions, with each unique word in the corpus being assigned a corresponding vector in the space. Word vectors are positioned in the vector space such that words that share common contexts in the corpus are located in close proximity to one another in the space.
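To make this concrete, here is a minimal sketch using `gensim`, a common Python implementation of Word2vec. The toy corpus and hyperparameters are illustrative assumptions for demonstration, not this project's actual setup:

```python
# Minimal Word2vec sketch with gensim; the corpus and settings below
# are illustrative only -- a real run needs a large corpus of text.
from gensim.models import Word2Vec

# Each sentence is a list of tokens.
sentences = [
    ["king", "rules", "the", "kingdom"],
    ["queen", "rules", "the", "kingdom"],
    ["the", "dog", "chases", "the", "cat"],
]

model = Word2Vec(
    sentences,
    vector_size=100,  # dimensionality of the embedding space
    window=5,         # context window around each target word
    min_count=1,      # keep every word in this toy corpus
    sg=1,             # 1 = skip-gram, 0 = CBOW
)

# Each unique word is now assigned a vector in the space.
vector = model.wv["king"]                     # 100-dimensional numpy array
print(model.wv.most_similar("king", topn=3))  # nearest neighbours in the space
```

With a real corpus, words that share contexts ("king" and "queen", say) end up close together in the vector space, which is exactly what `most_similar` exposes.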

Word2vec was created by a team of researchers led by Tomas Mikolov at Google, and the algorithm has since been analysed and explained by other researchers. Embedding vectors created with Word2vec have many advantages over earlier approaches such as Latent Semantic Analysis.

## Dependencies

Run `pip install -r pip-requirements.txt` to install the necessary dependencies.

## Usage

`testing.ipynb` contains the commented code; `thrones2vec.ipynb` contains the code together with its output.
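For orientation, querying a trained model typically looks something like the sketch below. The filename `thrones2vec.w2v` and the query words are assumptions for illustration, not taken from the notebooks:

```python
# Hypothetical sketch of loading and querying a saved model (gensim 4.x API);
# the filename "thrones2vec.w2v" is an assumption, not confirmed by the repo.
from gensim.models import Word2Vec

model = Word2Vec.load("thrones2vec.w2v")

# Words that appear in similar contexts in the training corpus.
print(model.wv.most_similar("Stark"))

# Classic analogy arithmetic on word vectors: king - man + woman ~ queen.
print(model.wv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
```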

## Languages

- Jupyter Notebook: 99.7%
- Python: 0.3%