Sumanth Doddapaneni's repositories
Face-Recognition
Face Recognition based attendance system for classroom environments. Developed a Python API that recognizes the people in a picture (of a classroom), matches them against all the student registrations for that course, returns an image with all recognized and unrecognized faces (face tags), and marks all recognized students as present.
image-descriptor
This repository contains the work done as part of Made With ML Incubator. The goal of the project is to generate dense captions for images by learning visual representations from the rich text descriptions.
paper-tracker
Track all seminal papers across domains of NLP and Speech
Traffic-Counter
Automated traffic census achieved with object detection using TensorFlow. Optimised to work on mobile devices.
datasets
🤗 Fast, efficient, open-access datasets and evaluation metrics in PyTorch, TensorFlow, NumPy and Pandas
fairseq
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
google-research
Google Research
indicTrans
Scripts for indicTranslate - Machine Translation for Indian languages
MMLMCalibration
Code for EMNLP 2022 Paper: On the Calibration of Massively Multilingual Language Models
place-holder-sumanthd17.github.io
Use this template if you need a quick developer / data science portfolio! Based on a Minimal Jekyll theme for GitHub Pages.
rg-schedule
Minimal Jekyll theme for storytellers
s3prl
Self-Supervised Speech Pre-training and Representation Learning Toolkit.
SAT
Styled Augmented Translation
Spoken-to-Written
Converting spoken English to written English
sumanthd17.github.io
A beautiful Jekyll theme for academics
text-to-text-transfer-transformer
Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"
transformers
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
translate-with-llms
Web app to figure out the right prompts for translation with GPT-X models
XLM
PyTorch original implementation of Cross-lingual Language Model Pretraining.