There are 31 repositories under the wordembeddings topic.
Extremely simple and fast word2vec implementation with Negative Sampling + Sub-sampling
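For reference, the same two tricks can be reproduced with Gensim (an assumption for illustration; this repository ships its own implementation): `negative` sets the number of negative samples per positive pair and `sample` sets the sub-sampling threshold for frequent words.

```python
# Sketch of word2vec with negative sampling + sub-sampling via Gensim,
# not this repository's own code.
from gensim.models import Word2Vec

sentences = [["the", "quick", "brown", "fox"],
             ["jumps", "over", "the", "lazy", "dog"]]

model = Word2Vec(
    sentences,
    vector_size=100,  # embedding dimensionality
    window=5,         # context window size
    sg=1,             # skip-gram architecture
    negative=5,       # 5 negative samples per positive pair
    sample=1e-3,      # sub-sampling threshold for frequent words
    min_count=1,      # keep every word in this toy corpus
)
print(model.wv.most_similar("fox", topn=3))
```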
Implementing Facebook's FastText in Java
Using pre-trained word embeddings (fastText, Word2Vec)
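A minimal sketch of loading such pre-trained vectors with Gensim's KeyedVectors (the file name is a placeholder for a downloaded fastText .vec file):

```python
# Sketch: loading pre-trained vectors in word2vec text format with Gensim.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "cc.en.300.vec",  # placeholder: a fastText .vec file is plain text
    binary=False,
)
print(vectors.most_similar("computer", topn=5))
```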
Web service: uses Tencent's 8-million-word embedding model and the Spotify Annoy engine to find similar keywords
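A sketch of the Annoy side of such a service (the random stand-in vectors are an assumption; Tencent's published vectors are 200-dimensional):

```python
# Sketch: approximate nearest-neighbour keyword lookup with Annoy.
from annoy import AnnoyIndex
import numpy as np

dim = 200                              # Tencent's vectors are 200-d
index = AnnoyIndex(dim, "angular")     # angular distance, related to cosine
vectors = np.random.rand(1000, dim)    # stand-in for the real word vectors
for i, vec in enumerate(vectors):
    index.add_item(i, vec)
index.build(10)                        # 10 trees; more trees -> better recall

print(index.get_nns_by_item(0, 10))    # ten nearest neighbours of item 0
```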
Cross-Lingual Alignment of Contextual Word Embeddings
Storage and retrieval of Word Embeddings in various databases
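As an illustration of the storage pattern, a minimal SQLite sketch (schema and table name are assumptions, not this repository's design):

```python
# Sketch: storing and retrieving word vectors as BLOBs in SQLite.
import sqlite3
import numpy as np

conn = sqlite3.connect("embeddings.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS embeddings (word TEXT PRIMARY KEY, vec BLOB)"
)

vec = np.random.rand(300).astype(np.float32)  # stand-in for a real vector
conn.execute(
    "INSERT OR REPLACE INTO embeddings VALUES (?, ?)",
    ("apple", vec.tobytes()),  # serialize the vector to raw bytes
)
conn.commit()

row = conn.execute(
    "SELECT vec FROM embeddings WHERE word = ?", ("apple",)
).fetchone()
restored = np.frombuffer(row[0], dtype=np.float32)  # bytes -> vector
```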
PyTorch implementation of Word2Vec (Skip-Gram model), with the trained embeddings visualized using t-SNE
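A small sketch of the visualization step with scikit-learn's t-SNE (library choice and the random stand-in vectors are assumptions):

```python
# Sketch: projecting trained embeddings to 2-D with t-SNE and plotting them.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

words = ["king", "queen", "man", "woman", "apple", "banana"]
vectors = np.random.rand(len(words), 100)  # stand-in for model.wv[words]

coords = TSNE(n_components=2, perplexity=3, random_state=0).fit_transform(vectors)
plt.scatter(coords[:, 0], coords[:, 1])
for word, (x, y) in zip(words, coords):
    plt.annotate(word, (x, y))  # label each point with its word
plt.show()
```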
This repository contains source code to binarize any real-valued word embeddings into binary vectors.
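A naive sign-threshold baseline for comparison (only an illustration; the repository's own binarization method is learned and will differ):

```python
# Sketch: the simplest possible binarization, one bit per dimension.
import numpy as np

real_vectors = np.random.randn(1000, 300)             # real-valued embeddings
binary_vectors = (real_vectors > 0).astype(np.uint8)  # sign threshold at 0
print(binary_vectors[0][:10])
```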
Dutch word embeddings, trained on a large collection of Dutch social media messages and news/blog/forum posts.
Aspect-Based Sentiment Analysis
Extending conceptual thinking with semantic embeddings.
Repository for the experiments described in the paper named "DeepSentiPers: Novel Deep Learning Models Trained Over Proposed Augmented Persian Sentiment Corpus"
TwitPersonality: Computing Personality Traits from Tweets using Word Embeddings and Supervised Learning
flairR: Bring Amazing Flair NLP to R
:book: :books: :newspaper: Workshop that demonstrates using and analyzing text in R.
Improving word embeddings by combining them with their POS (part-of-speech) tags.
This project aims to implement the Transformer Encoder blocks using various Positional Encoding methods.
Diachronic Word Embedding Model based on Word2vec Skip-gram with Chebyshev approximation
A Persian Word2Vec Model trained by Wikipedia articles
Twitter Text Sentiment Analysis (preprocessing using spaCy)
Visualize word2vec in JavaScript
Temporal analysis of legal texts via topic modelling and temporal word embeddings
Opinion Extraction based on Amazon Reviews
The repository contains code to replicate the experiments in the paper "Robustness and Reliability of Gender Bias Assessment in Word Embeddings: The Role of Base Pairs", by Haiyang Zhang, Alison Sneyd and Mark Stevenson, AACL 2020.
Icelandic Word Embeddings. Here you can find pre-trained word embedding models. Current methods: CBOW, Skip-Gram, and FastText (from the Gensim library). The .model files are available for download.
Evaluation of Polish word embeddings prepared by various research groups, using the word analogy task.
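A sketch of the word analogy task with Gensim's built-in evaluator (file names are placeholders; the actual Polish evaluation data differs):

```python
# Sketch: word analogy evaluation with Gensim.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("embeddings.vec", binary=False)

# A single "king - man + woman ~= queen" style query.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))

# Batch evaluation against an analogy file in questions-words format.
score, sections = vectors.evaluate_word_analogies("questions-words.txt")
print(f"overall accuracy: {score:.3f}")
```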
Word embedding visualization with t-SNE (t-distributed stochastic neighbor embedding) for BERT, ALBERT, ELMo, ELECTRA, XLNet, and GloVe.
Review classification and recommendation system based on random forests and data analysis of the Amazon food review dataset
https://tchanda90.github.io/covid19-textmining/
We use reinforcement learning to study how language can be used as a tool for agents to accomplish tasks in their environment. We show that structure in the evolved language emerges naturally through iterated learning, leading to the development of compositional language for describing and generalising about unseen objects.
A package for processing embeddings