There are 3 repositories under the bleu-score topic.
A neural network to generate captions for an image using a CNN and an RNN with beam search.
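Beam search, used by the captioning entry above, keeps the top-k partial sequences at each decoding step instead of greedily taking the single best next token. A minimal generic sketch (the `step_fn` interface, token names, and defaults are illustrative, not taken from any of the listed repositories):

```python
import math

def beam_search(step_fn, start_token, end_token, beam_width=3, max_len=10):
    """Generic beam search decoder.

    step_fn(sequence) must return a list of (token, probability) pairs
    for the next step, given the sequence generated so far.
    """
    # Each beam is (sequence, cumulative log-probability).
    beams = [([start_token], 0.0)]
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] == end_token:
                candidates.append((seq, score))  # finished beams carry over
                continue
            for token, prob in step_fn(seq):
                candidates.append((seq + [token], score + math.log(prob)))
        # Keep only the top-k candidates by cumulative log-probability.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
        if all(seq[-1] == end_token for seq, _ in beams):
            break
    return beams[0][0]
```

In a captioning model, `step_fn` would run the LSTM decoder one step on the image features plus the tokens so far and return the next-token distribution; summing log-probabilities avoids underflow from multiplying many small probabilities.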
A modular library built on top of Keras and TensorFlow to generate a caption in natural language for any input image.
The LSTM model generates captions for input images after extracting features with a pre-trained VGG-16 model. (Computer Vision, NLP, Deep Learning, Python)
A visual and interactive scoring environment for machine translation systems.
Deep CNN-LSTM for Generating Image Descriptions :smiling_imp:
Several methods are used to evaluate machine translation; some of them are fully implemented here.
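BLEU, the metric this topic page groups repositories around, scores a candidate translation by modified n-gram precision against a reference, scaled by a brevity penalty. A minimal single-reference sketch (the function name and the smoothing constant are illustrative choices, not any listed repository's implementation):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference, candidate, max_n=4):
    """Sentence-level BLEU: geometric mean of modified n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        # Clip each n-gram count by its count in the reference.
        overlap = sum(min(count, ref[g]) for g, count in cand.items())
        total = max(sum(cand.values()), 1)
        # Tiny floor avoids log(0) when an n-gram order has no matches.
        precisions.append(max(overlap, 1e-9) / total)
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty: punish candidates shorter than the reference.
    if len(candidate) >= len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(log_avg)
```

Production evaluations typically use a library such as NLTK or sacreBLEU, which also handle multiple references and standardized tokenization.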
Scripts for an upcoming blog "Extractive vs. Abstractive Summarization" for RaRe Technologies.
State-of-the-art Neural Machine Translation with PyTorch and TorchText.
In this project, I define and train an image-to-caption model that produces descriptions for real-world images, using the Flickr-8k dataset.
Machine learning tools for NLP programming.
⚡ A Seq2Seq model combined with an Attention mechanism
Repository containing the code to my bachelor thesis about Neural Machine Translation
Generate captions for images using a CNN encoder and LSTM decoder structure
Tensorflow implementation of "Show and Tell"
Generate captions from images
Using Google Colab, we develop a neural machine translation (NMT) system that translates from English to Vietnamese.
A CNN-LSTM model to generate a sentence/caption that describes the contents/scene of an image.
Modern Eager TensorFlow implementation of Attention Is All You Need
In this project, we use a deep recurrent architecture: a CNN (VGG-16) pretrained on ImageNet extracts a 4096-dimensional image feature vector, and an LSTM generates a caption from these feature vectors.
Image Captioning using Deep learning models in Keras.
A benchmark of ChatGPT and some of its challengers on the summarization task
A model inspired by the famous Show and Tell model, implemented for automatic image captioning.
The work presented was developed during an internship as researchers in the field of Natural Language Generation at the Insid&s Lab laboratory in Milan-Bicocca. It covers the creation of a framework for correctly assessing the impact of input-dataset quality on the quality of the text generated by NLG models, specifically: creation of the "Concept-Based" and "Entity-Based" versions of the WebNLG dataset; evaluation of the quality of the created datasets; training of LSTM and Transformer models with the OpenNMT tool; natural language text generation by the LSTM and Transformer models; evaluation of the quality of the generated text; and a final analysis.
This project aims to assist visually impaired individuals by providing a solution to convert images into spoken language. Leveraging deep learning and natural language processing, the system processes images, generates descriptive captions, and converts these captions into audio output.
Natural Language Processing (classification and machine translation) code and analysis from the year-long practicum at Dublin City University (2019-20)
LSTM (RNN) Implementation for Image Captioning on VizWiz-Captions dataset
Image caption generation is a task that combines computer vision and natural language processing to recognize the context of an image and describe it in a natural language such as English.
A PyTorch implementation of Transformers from scratch for Machine Translation, based on "Attention Is All You Need" by Ashish Vaswani et al.
PyTorch implementation of "Attention Is All You Need" by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
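The core operation the "Attention Is All You Need" implementations above build on is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. A NumPy sketch (the shapes and masking constant are illustrative; the listed repositories implement this in PyTorch or TensorFlow):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Scaled dot-product attention (Vaswani et al., 2017):
    softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    # Similarity scores between queries and keys, scaled to keep
    # softmax gradients well-behaved for large d_k.
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)
    if mask is not None:
        # Large negative value blocks attention to masked positions.
        scores = np.where(mask, scores, -1e9)
    # Numerically stable softmax over the key axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights
```

Multi-head attention applies this function in parallel to several learned linear projections of Q, K, and V, then concatenates the results.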