translator

Description

An English-to-Spanish translation website running on a multithreaded web server.
Check out this notebook for the ML code.

Contributor(s)

  • Tate Larkin

Server

I created a multithreaded web server in Rust from scratch to serve the website. The server interacts with the machine learning model to dynamically process user input and serve a translated response. It uses a thread pool that holds a fixed number of threads it can allocate to tasks: workers accept the code that needs to be executed, run it on separate threads in parallel, and when a worker finishes its task it returns to the pool to accept a new one.
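As a rough sketch of that pattern (the repository's actual implementation may differ), the standard approach from The Rust Programming Language book uses a channel shared behind an `Arc<Mutex<...>>`: the pool sends boxed closures down the channel, and each worker thread loops, pulling the next job and running it. The names `ThreadPool`, `Worker`, and `Job` here are illustrative.

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

/// A job is any closure the server wants a worker to run.
type Job = Box<dyn FnOnce() + Send + 'static>;

pub struct ThreadPool {
    workers: Vec<Worker>,
    sender: mpsc::Sender<Job>,
}

impl ThreadPool {
    /// Create a pool with `size` worker threads, all waiting on one shared channel.
    pub fn new(size: usize) -> ThreadPool {
        assert!(size > 0);
        let (sender, receiver) = mpsc::channel();
        let receiver = Arc::new(Mutex::new(receiver));

        let workers: Vec<Worker> = (0..size)
            .map(|id| Worker::new(id, Arc::clone(&receiver)))
            .collect();

        ThreadPool { workers, sender }
    }

    /// Hand a task to the pool; whichever worker is idle picks it up.
    pub fn execute<F>(&self, f: F)
    where
        F: FnOnce() + Send + 'static,
    {
        self.sender.send(Box::new(f)).unwrap();
    }
}

struct Worker {
    id: usize,
    thread: thread::JoinHandle<()>,
}

impl Worker {
    fn new(id: usize, receiver: Arc<Mutex<mpsc::Receiver<Job>>>) -> Worker {
        let thread = thread::spawn(move || loop {
            // Lock the channel just long enough to take the next job,
            // then release the lock before running it.
            let job = receiver.lock().unwrap().recv();
            match job {
                Ok(job) => job(),
                Err(_) => break, // channel closed: the pool is shutting down
            }
        });
        Worker { id, thread }
    }
}
```

With this in place, the server's accept loop can dispatch each incoming connection with something like `pool.execute(|| handle_connection(stream));`, where `handle_connection` stands in for whatever request handler the server actually uses.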

Machine Learning Model

I used a Transformer model as described in the Attention Is All You Need paper. This architecture relies entirely on attention mechanisms to track relationships and find patterns in the data, using self-attention to weight the significance of each part of the input. Inputs are embedded, positionally encoded, and passed through an encoder and decoder built from stacked Multi-Head Attention layers, which run the attention mechanism several times in parallel. A final point-wise dense layer then produces the predictions.
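For reference, the two attention computations pictured in the diagrams below are defined in the paper as:

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V

\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \dots, \mathrm{head}_h)\,W^{O}, \qquad \mathrm{head}_i = \mathrm{Attention}(QW_i^{Q}, KW_i^{K}, VW_i^{V})

where d_k is the key dimension; scaling by the square root of d_k keeps the dot products from pushing the softmax into regions with vanishing gradients.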

Scaled Dot Product Attention and Multi-Head Attention diagrams

Full Model diagram

Dataset

TensorFlow English-Spanish dataset

Website

For the website design, I mimicked a messaging app such as Apple's Messages and other texting software. I used Mobile First design tactics to keep the layout responsive and readable on any screen size.

About

Transformer translator website with multithreaded web server in Rust


Languages

  • Rust 44.2%
  • CSS 26.0%
  • HTML 17.9%
  • JavaScript 8.6%
  • Python 3.3%