YanSte / NLP-LLM-Society-Bias-Toxicity

Natural Language Processing (NLP) and Large Language Models (LLMs): society, bias, and toxicity

Home Page: https://www.kaggle.com/code/yannicksteph/nlp-llm-society-bias-toxicity


| NLP | LLM | Society | Bias | Toxicity |

Learning

Overview

This notebook covers various aspects related to language models, with a focus on societal implications. Here's an overview of the main sections:

Learning Objectives

This section outlines the learning objectives:

  1. Understanding representation bias in training data.
  2. Using Hugging Face to calculate toxicity scores.
  3. Using SHAP to generate explanations for model output.
  4. Exploring the latest research advancements in model explanation: contrastive explanation.
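Objective 2 (scoring toxicity with Hugging Face) can be sketched as follows. The `evaluate` library and its `"toxicity"` metric are real, but loading the metric downloads its default classifier model, so the actual call is shown only in a comment; the `toxicity_ratio` helper and the example scores below are illustrative assumptions, mirroring the metric's built-in "ratio" aggregation.

```python
# Sketch of toxicity scoring (objective 2). In the notebook, per-sentence
# scores would come from Hugging Face's `evaluate` library (requires a
# model download), roughly:
#
#   import evaluate
#   toxicity = evaluate.load("toxicity")
#   scores = toxicity.compute(predictions=texts)["toxicity"]
#
# The helper below is a hypothetical aggregation step: the fraction of
# inputs whose score exceeds a toxicity threshold.

def toxicity_ratio(scores, threshold=0.5):
    """Fraction of texts whose toxicity score exceeds `threshold`."""
    if not scores:
        return 0.0
    return sum(score > threshold for score in scores) / len(scores)

# Example with made-up scores (not real model output):
print(toxicity_ratio([0.9, 0.1, 0.6]))  # → 0.6666666666666666
```

Aggregating to a ratio rather than reporting raw scores makes it easy to compare toxicity rates across model outputs or data slices.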

About

Natural Language Processing (NLP) and Large Language Models (LLMs): society, bias, and toxicity

https://www.kaggle.com/code/yannicksteph/nlp-llm-society-bias-toxicity


Languages

Language: Jupyter Notebook 100.0%