eschaffn / Continuous-Representation-Experiment

In this project I train and evaluate three different methods for next-word prediction using LSTMs with continuous-valued inputs and outputs. With what I call sequence-to-token (S2T), an input sequence is encoded as continuous float vectors (embeddings) and used to predict a final masked token in the sequence.
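The sketch below illustrates the S2T setup described above, assuming a PyTorch implementation; the class name, hyperparameters, and training-free usage example are illustrative assumptions, not code taken from this repository.

```python
# Minimal S2T sketch (assumed PyTorch): embed a context sequence as float
# vectors, run an LSTM over it, and predict the held-out final token.
import torch
import torch.nn as nn

class S2TModel(nn.Module):
    """Illustrative sequence-to-token model: context tokens -> masked final token."""
    def __init__(self, vocab_size: int, embed_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # tokens -> continuous embeddings
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)        # last hidden state -> token logits

    def forward(self, context_ids: torch.Tensor) -> torch.Tensor:
        # context_ids: (batch, seq_len) token ids, with the final token held out
        embedded = self.embed(context_ids)                  # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(embedded)                   # h_n: (1, batch, hidden_dim)
        return self.out(h_n[-1])                            # logits over the masked final token

# Toy usage: predict the masked last token for a batch of 4 context sequences.
model = S2TModel(vocab_size=1000)
context = torch.randint(0, 1000, (4, 9))                    # 4 sequences of 9 context tokens
logits = model(context)                                      # (4, 1000)
predicted = logits.argmax(dim=-1)                            # predicted final-token ids
```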
