Repositories under the distilbert-model topic:
This repository contains a DistilBERT model fine-tuned using the Hugging Face Transformers library on the IMDb movie review dataset. The model is trained for sentiment analysis, enabling the determination of sentiment polarity (positive or negative) within text reviews.
This paper describes Humor Analysis using Ensembles of Simple Transformers, the winning submission at the Humor Analysis based on Human Annotation (HAHA) task at IberLEF 2021.
The official repository for the PSYCHIC model
Multiclass classification on tweets about the coronavirus
This repository contains my work on the prevention and anonymization of dox content on Twitter. It includes Python code and a demo of the proposed solution.
This app searches Reddit posts and comments to determine whether a product or service has a positive or negative sentiment, and predicts top product mentions using Named Entity Recognition.
Fine-tuning pre-trained transformer models in TensorFlow and in PyTorch for question answering
Fine-tune the DistilBERT Transformer model with the PyTorch framework, then run inference on a dataset using the fine-tuned model via the Pipeline API.
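The fine-tune-then-infer workflow described above can be sketched as follows. This is a minimal sketch, assuming the `transformers` package is installed; the public SST-2 fine-tuned DistilBERT checkpoint stands in for a model you fine-tuned yourself (in that case you would pass the path of your training output directory instead).

```python
# Sketch: run inference with a fine-tuned DistilBERT via the pipeline API.
# "distilbert-base-uncased-finetuned-sst-2-english" is a public checkpoint
# standing in for a locally fine-tuned model directory.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

results = classifier(["A wonderful film.", "A complete waste of time."])
for r in results:
    # Each result is a dict with a predicted label and a confidence score.
    print(r["label"], round(r["score"], 3))
```

The same `pipeline(...)` call accepts a local directory path for `model`, which is how a checkpoint saved after fine-tuning would typically be loaded.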
Using BERT models to perform sentiment analysis on women's clothing
Performing a named entity extraction task using Hugging Face Transformers
We explored recent studies on question answering systems, then tried out three different QA models (based on BERT and DistilBERT) for the sake of learning.
Developing a feedback theory-informed natural language processing (NLP) model to enable large-scale evaluation of written feedback, and analysing a large set of feedback extracted from Moodle using this model to understand the presence of student-centred feedback elements, the commonality and differences in feedback provision across disciplines.
Advanced NLP with Contextual Question Answering: This notebook extracts, cleans, and processes text data from multiple files. It utilizes transformer models for contextual question answering and sentence generation. Perfect for exploring cutting-edge NLP techniques and comparing transformer model performances.
Sentiment analysis on product reviews with VADER and DistilBERT
The data and code for my master's thesis for the MA Digital Text Analysis at the University of Antwerp
Thesis Project
Fine-tuning DistilBERT on the IMDb dataset
Using BERT and DistilBERT to fill in the gaps
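Assuming "filling in the gaps" refers to masked-token prediction (the fill-mask task), a minimal sketch with the `transformers` package and the stock `distilbert-base-uncased` checkpoint looks like this:

```python
# Sketch: masked-token prediction with DistilBERT via the fill-mask pipeline.
# distilbert-base-uncased uses [MASK] as its mask token.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="distilbert-base-uncased")

predictions = fill_mask("Paris is the [MASK] of France.")
for p in predictions[:3]:
    # Each prediction carries the filled-in token and its probability.
    print(p["token_str"], round(p["score"], 3))
```

Swapping the model name for `bert-base-uncased` runs the same task with BERT, which is one straightforward way such a repository could compare the two.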
DistilBERT and LSTM models can identify hate speech in various text sequences. In our project, we combined datasets to evaluate their performance on the validation set, achieving 93% accuracy with DistilBERT and 94% with the LSTM.
Deep learning for Natural Language Processing
Aims to build a question-answering product that can understand the information in these articles and answer simple questions related to them.
Positive/negative sentiment model on cleaned text data using the pre-trained DistilBERT NLP model from Hugging Face
Classification, ADSA and Text Summarisation based project for BridgeI2I Task at Inter IIT 2021 Competition. Silver Medalists.
This project classifies Internet Hinglish memes using multimodal learning. It combines text and image analysis to categorize memes by sentiment and emotion, leveraging the Memotion 3.0 dataset.
This project involves analyzing and classifying the BoolQ dataset from the SuperGLUE benchmark. We implemented various classifiers and techniques, including rules-based logic, BERT, RNN, and GPT-3/4 data augmentation, achieving performance improvements.
Implemented pre-trained Transformer-based DistilBERT and multilingual BERT models to classify sentiment as positive or negative and rank it on a scale of 1 to 5
This project centers on improving customer satisfaction by conducting sentiment analysis on customer feedback for an online-classes and video-conferencing app. The aim is to decipher customer sentiment in the feedback, extract insights, and improve the user experience while addressing any concerns.
This project is designed to streamline the recruitment process by providing a job and resume matching system and a chatbot for applicants. The key functionalities include job and resume matching and an LLM-powered chatbot.
News article Similarity
Successfully fine-tuned a pretrained DistilBERT transformer model that classifies social media text into one of four cyberbullying labels (ethnicity/race, gender/sexual, religion, or not cyberbullying) with a remarkable accuracy of 99%.
Successfully developed a fine-tuned DistilBERT transformer model that predicts the overall sentiment of a piece of financial news with an accuracy of nearly 81.5%.
🗨️ This repository contains a collection of notebooks and resources for various NLP tasks using different architectures and frameworks.