Repositories under the distilbert-model topic:
This repository contains a DistilBERT model fine-tuned with the Hugging Face Transformers library on the IMDb movie review dataset. The model is trained for sentiment analysis, classifying text reviews as positive or negative.
This paper describes Humor Analysis using Ensembles of Simple Transformers, the winning submission at the Humor Analysis based on Human Annotation (HAHA) task at IberLEF 2021.
The official repository for the PSYCHIC model.
This repository contains my work on the prevention and anonymization of dox content on Twitter, including Python code and a demo of the proposed solution.
This project classifies Internet Hinglish memes using multimodal learning. It combines text and image analysis to categorize memes by sentiment and emotion, leveraging the Memotion 3.0 dataset.
Sentiment analysis with the distilbert-base-uncased model on a movie dataset.
This app searches Reddit posts and comments to determine whether a product or service has positive or negative sentiment, and predicts top product mentions using named entity recognition.
Analyzes emotions in per-chapter text chunks using a sentiment analysis model, visualizing scores across chunks as line graphs, with pie charts showing each chapter's dominant emotions. Developed using the Transformers library.
Using BERT models to perform sentiment analysis on women's clothing
Deep learning for Natural Language Processing
Multiclass classification on tweets about the coronavirus
Fine-tuning pre-trained transformer models in TensorFlow and PyTorch for question answering.
Fine-tunes the DistilBERT transformer model with the PyTorch framework, then runs inference on a dataset with the fine-tuned model via the Pipeline API.
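The fine-tune-then-Pipeline workflow this entry describes can be sketched as follows. This is a minimal sketch under assumptions: the checkpoint name, label mapping, and example texts are illustrative, not details taken from the repository.

```python
"""Sketch: serving a fine-tuned DistilBERT checkpoint through a Pipeline."""


def label_to_polarity(label: str) -> str:
    """Map SST-2-style pipeline labels to lowercase polarity tags."""
    return {"POSITIVE": "positive", "NEGATIVE": "negative"}.get(label, "unknown")


def build_classifier(checkpoint: str = "distilbert-base-uncased-finetuned-sst-2-english"):
    """Load a sentiment-analysis Pipeline for the given checkpoint.

    The import is deferred so the sketch can be read (and the helper above
    used) without the transformers library installed.
    """
    from transformers import pipeline

    return pipeline("sentiment-analysis", model=checkpoint)


if __name__ == "__main__":
    classifier = build_classifier()
    texts = ["A thoroughly enjoyable film.", "Two hours I will never get back."]
    for result in classifier(texts):
        print(label_to_polarity(result["label"]), round(result["score"], 3))
```

The Pipeline handles tokenization, batching, and softmax internally, so inference on a fine-tuned checkpoint reduces to a single call.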
Performing named entity extraction using Hugging Face Transformers.
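Named entity extraction like this entry describes is commonly built on the Transformers token-classification Pipeline; the sketch below assumes that approach, and the model name and sample input are illustrative rather than taken from the repository.

```python
"""Sketch: named entity extraction via a Transformers Pipeline."""


def extract_entities(predictions):
    """Reduce aggregated pipeline output dicts to (text, entity_group) pairs."""
    return [(p["word"], p["entity_group"]) for p in predictions]


def build_ner(checkpoint: str = "dslim/bert-base-NER"):
    """Load a NER Pipeline that merges sub-word pieces into whole entities.

    The import is deferred so the sketch reads without transformers installed.
    """
    from transformers import pipeline

    return pipeline("ner", model=checkpoint, aggregation_strategy="simple")


if __name__ == "__main__":
    ner = build_ner()
    print(extract_entities(ner("Hugging Face is based in New York City.")))
```

With `aggregation_strategy="simple"`, the pipeline groups contiguous sub-word tokens, so each output dict already represents one whole entity span.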
We explored recent studies on question answering systems, then tried out three different QA models (BERT and DistilBERT) for the sake of learning.
Developing a feedback theory-informed natural language processing (NLP) model to enable large-scale evaluation of written feedback, and analysing a large set of feedback extracted from Moodle using this model to understand the presence of student-centred feedback elements, the commonality and differences in feedback provision across disciplines.
This project involves analyzing and classifying the BoolQ dataset from the SuperGLUE benchmark. We implemented various classifiers and techniques, including rule-based logic, BERT, RNN, and GPT-3/4 data augmentation, achieving performance improvements.
This project is designed to streamline the recruitment process by providing a job and resume matching system and a chatbot for applicants. The key functionalities are job and resume matching and an LLM-powered chatbot.
Advanced NLP with Contextual Question Answering: This notebook extracts, cleans, and processes text data from multiple files. It utilizes transformer models for contextual question answering and sentence generation. Perfect for exploring cutting-edge NLP techniques and comparing transformer model performances.
This repository contains my personal learning project on sentiment analysis using the Simple Transformers BERT model.
Transformer text classifier built on the DistilBERT model.
Fine-tuning a BERT-based model to predict whether a tweet is toxic.
Fine-tunes BERT on a question answering dataset, then further fine-tunes it on finance data to answer questions posed by senior leadership.
Sentiment Analysis of movie reviews
Classification, ADSA, and text summarisation project for the BridgeI2I task at the Inter IIT 2021 competition. Silver medalists.
Implemented pre-trained Transformer-based DistilBERT and multilingual BERT models to classify sentiment as positive or negative and rank texts on a scale of 1 to 5.
This is a production-ready DistilBERT sentiment analysis model for product reviews, designed to work as a low-cost market research tool with the nuance of an actual market researcher.
This is a production-ready DistilBERT sentiment analysis model for service reviews, designed to work as a low-cost market research tool with the nuance of an actual market researcher.
Fine-tuned a pretrained DistilBERT transformer model that classifies social media text into one of four cyberbullying labels (ethnicity/race, gender/sexual, religion, not cyberbullying) with a remarkable accuracy of 99%.
Developed a fine-tuned DistilBERT transformer model that predicts the overall sentiment of financial news with nearly 81.5% accuracy.
🗨️ This repository contains a collection of notebooks and resources for various NLP tasks using different architectures and frameworks.
This project analyzes and compares the Wikipedia articles of Xi Jinping and Vladimir Putin over 20 years, uncovering differences in portrayal, sentiment, and biases to measure public perception of each leader.