Pranav.N.Venkit (PranavNV)

Company: Doctoral Student

Home Page: https://www.pranavkit.com/

Pranav.N.Venkit's repositories

Nationality-Prejudice-in-Text-Generation

This project analyzes text generation models such as GPT-2 to identify and understand prejudicial behavior and biases against various nationalities.
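As one illustration of what such probing could involve, the sketch below generates GPT-2 completions for nationality-templated prompts with the Hugging Face transformers library; the template, nationality list, and decoding settings are assumptions for the example, not the project's actual setup.

```python
# Sketch: generate GPT-2 completions for nationality-templated prompts so the
# outputs can later be inspected or scored for prejudiced language.
# Assumes the Hugging Face `transformers` package; prompts are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

nationalities = ["American", "Mexican", "Indian", "Nigerian"]
template = "The {} man worked as"

for nationality in nationalities:
    prompt = template.format(nationality)
    outputs = generator(prompt, max_new_tokens=30, num_return_sequences=3,
                        do_sample=True, pad_token_id=50256)
    for out in outputs:
        print(out["generated_text"])
```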

BITS

Bias Identification Test in Sentiments (BITS) consists of 2,896 sentences curated to probe sentiment analysis and toxicity analysis models for biases in sociodemographic factors like disability, race and gender.

Stargazers: 1 · Issues: 0
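A minimal sketch of this kind of probing, assuming NLTK's VADER as a stand-in sentiment model; the template and sociodemographic terms below are illustrative and are not the actual BITS sentences.

```python
# Sketch: probe a sentiment model with template sentences that differ only in a
# sociodemographic term, then compare the scores. Uses NLTK's VADER as a
# stand-in model; the template and terms are illustrative, not BITS data.
from nltk.sentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()  # requires nltk.download("vader_lexicon")

template = "My neighbor is a {} person."
terms = ["blind", "deaf", "Black", "white", "transgender"]

for term in terms:
    sentence = template.format(term)
    score = analyzer.polarity_scores(sentence)["compound"]
    print(f"{sentence:45s} compound = {score:+.3f}")
```

A systematic gap between the scores for otherwise identical sentences is the kind of signal such a probe is meant to surface.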

PretzelXD

This project implements SemEval-2018 Task 2 (Multilingual Emoji Prediction).

Language: Python · Stargazers: 1 · Issues: 3
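For context, SemEval-2018 Task 2 asks systems to predict the most likely emoji for a tweet. The sketch below is a generic bag-of-words baseline for that kind of task, with toy data; it is not the approach used in this repository.

```python
# Sketch: a bag-of-words baseline for emoji prediction (tweet text -> emoji label).
# Toy data only; the real task uses the official tweet corpus and 20 emoji classes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tweets = ["what a beautiful sunset", "so proud of my team tonight",
          "ugh stuck in traffic again", "happy birthday to my best friend"]
emojis = ["❤", "🔥", "😩", "🎉"]  # toy labels, one per tweet

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(tweets, emojis)
print(model.predict(["beautiful evening by the beach"]))
```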

Biblioteca

A multimodal enterprise search engine intended to search through the sentiment frames present in a given collection of books.

Stargazers: 0 · Issues: 2
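A rough sketch of the underlying idea, assuming TextBlob as a stand-in sentiment scorer and plain keyword matching; the passages and the notion of a "sentiment frame" are heavily simplified here for illustration.

```python
# Sketch: index book passages by sentiment polarity and answer queries that combine
# a keyword with a minimum sentiment. Uses TextBlob as a stand-in scorer; the
# passages and titles are toy data, not an actual book collection.
from textblob import TextBlob

passages = [
    ("Novel A", "The reunion was joyful and the whole family felt wonderfully happy."),
    ("Novel B", "The storm left the village in ruins and the survivors in despair."),
]

index = [(title, text, TextBlob(text).sentiment.polarity) for title, text in passages]

def search(keyword, min_polarity=0.0):
    """Return (title, polarity) pairs whose text mentions `keyword` and meets the threshold."""
    return [(title, pol) for title, text, pol in index
            if keyword.lower() in text.lower() and pol >= min_polarity]

print(search("family", min_polarity=0.2))
```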

ExData_Plotting1

Plotting Assignment 1 for Exploratory Data Analysis

Language: R · Stargazers: 0 · Issues: 0

hellobot2433

ChatBot example

Language: JavaScript · Stargazers: 0 · Issues: 0

lang-of-pol

Code for the NIH-funded "Primed to (re)act" project focused on processing and analyzing broadcast police communications.

License: MPL-2.0 · Stargazers: 0 · Issues: 0
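As a hedged illustration of one step such a pipeline might include, the sketch below transcribes an audio clip with the open-source openai-whisper package; this is not the project's actual pipeline, and the file path and model size are placeholders.

```python
# Sketch: transcribe a short radio clip with the open-source Whisper model, as one
# illustrative way to turn broadcast audio into text for downstream analysis.
# Placeholder file path and model size; not the project's actual tooling.
import whisper

model = whisper.load_model("base")           # small general-purpose model
result = model.transcribe("radio_clip.wav")  # returns a dict with "text" and "segments"
print(result["text"])
```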

The-Sentiment-Problem

We conduct an inquiry into the sociotechnical aspects of sentiment analysis (SA) by critically examining 189 peer-reviewed papers on its applications, models, and datasets.

Stargazers: 0 · Issues: 0

Toxic_Comments

Discussing things you care about can be difficult. The threat of abuse and harassment online means that many people stop expressing themselves and give up on seeking different opinions. Platforms struggle to effectively facilitate conversations, leading many communities to limit or completely shut down user comments.

The Conversation AI team, a research initiative founded by Jigsaw and Google (both part of Alphabet), is working on tools to help improve online conversation. One area of focus is the study of negative online behaviors, like toxic comments (i.e. comments that are rude, disrespectful, or otherwise likely to make someone leave a discussion). So far they have built a range of publicly available models served through the Perspective API, including toxicity. But the current models still make errors, and they don't allow users to select which types of toxicity they're interested in finding (e.g. some platforms may be fine with profanity, but not with other types of toxic content).

Language: Python · Stargazers: 0 · Issues: 0
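A minimal sketch of the multi-label framing this description implies (one comment can carry several toxicity types at once), using scikit-learn with toy data; the real challenge uses the Jigsaw dataset and its six label columns, which are not reproduced here.

```python
# Sketch: a multi-label baseline where one comment can receive several toxicity
# labels at once (e.g. toxic + insult). Toy comments and labels only; not the
# Jigsaw dataset or this repository's actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

comments = ["you are an idiot", "thanks for the helpful edit",
            "I will find you and hurt you", "great point, well sourced"]
labels = [["toxic", "insult"], [], ["toxic", "threat"], []]

binarizer = MultiLabelBinarizer()
y = binarizer.fit_transform(labels)  # one binary column per toxicity type

clf = make_pipeline(TfidfVectorizer(),
                    OneVsRestClassifier(LogisticRegression(max_iter=1000)))
clf.fit(comments, y)

# Smoke test on wording close to the training data; with toy data the
# predicted label set is only indicative.
pred = clf.predict(["you are an idiot"])
print(binarizer.inverse_transform(pred))
```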