
nlp_vec

About this repo

This repo contains code for the following tasks:

  1. Vectorize sentences using standard word embeddings such as GloVe and word2vec (see the sketch after this list).
  2. Build word2vec from scratch using skip-gram with negative sampling.
  3. Fine-tune large pre-trained language models on scientific data.
  4. Fine-tune a BERT model.
  5. Search for similar texts using cosine similarity and Euclidean distance.
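As an illustration of items 1 and 5, here is a minimal sketch (not the repo's actual code) of averaging pretrained word vectors into sentence vectors and ranking candidates by cosine similarity; the word_vectors lookup table and the 300-dimensional size are assumptions matching the GoogleNews download below.

import numpy as np

# Hypothetical lookup table, word -> 300-d vector, filled from one of the
# pretrained embeddings downloaded in the setup steps below.
word_vectors = {}
DIM = 300

def sentence_vector(sentence):
    # Average the embeddings of the in-vocabulary words.
    vecs = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(DIM)

def most_similar(query, corpus, k=3):
    # Rank corpus sentences by cosine similarity to the query.
    q = sentence_vector(query)
    scored = []
    for s in corpus:
        v = sentence_vector(s)
        denom = np.linalg.norm(q) * np.linalg.norm(v)
        score = float(q @ v / denom) if denom else 0.0
        scored.append((s, score))
    return sorted(scored, key=lambda x: x[1], reverse=True)[:k]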

References

The implementation for using pre-trained word embeddings is a minor modification of Natural language processing 2 by Lazyprogrammer.

The implementation of word2vec (skip-gram with negative sampling) is a minor modification of deep_learning_NLP by Tixierae.

Information retrieval using word2vec by Abhishek Sharma.

How to run this app

First, clone this repository and open a terminal inside the folder.

Download the pretrained vectors:

word2vec

wget -c "https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz"
gunzip GoogleNews-vectors-negative300.bin.gz 
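After unzipping, the binary file can be loaded with gensim, which is presumably covered by requirements.txt (a sketch, not the app's exact loading code):

from gensim.models import KeyedVectors

# Load the 300-d GoogleNews vectors in word2vec binary format.
w2v = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)
print(w2v["king"].shape)  # (300,)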

GloVe

wget -c https://nlp.stanford.edu/data/glove.6B.zip
unzip glove.6B.zip
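The zip contains plain-text files (glove.6B.50d.txt through glove.6B.300d.txt), each line holding a word followed by its vector, so they can be parsed directly; a sketch using the 100-d file:

import numpy as np

# Parse glove.6B.100d.txt into a word -> vector dict.
glove = {}
with open("glove.6B.100d.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.rstrip().split(" ")
        glove[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
print(glove["king"].shape)  # (100,)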

Install dependencies:

pip install -r requirements.txt

Run the app:

python app.py
