alecruces / GPT-Truth

Enter the realm of truth detection with GPT-Truth: fine-tuning GPT-3.5 for high accuracy in identifying deceptive opinions.


Truth and AI: The Impact of Fine-Tuning GPT-3.5 on Lie Detection


Description

This study evaluates the performance of a fine-tuned GPT-3.5 model in detecting deceptive opinions. Following Loconte et al. (2023), we use a dataset of opinions labeled as either truthful or deceptive and apply a fine-tuning process to enhance GPT-3.5's deception-detection capabilities. The findings show that the fine-tuned GPT-3.5 outperforms the FLAN-T5 model (presented in the aforementioned article) in accuracy, and that it exploits statistically significant linguistic features to distinguish truthful from deceptive statements. This research contributes to the potential applications of large language models in fields requiring deception detection.
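Fine-tuning GPT-3.5 on a classification task like this requires the labeled opinions to be converted into the chat-format JSONL expected by the OpenAI fine-tuning API. A minimal sketch of that preparation step is below; the example opinions and the system prompt are hypothetical placeholders, not the actual dataset or prompt used in the study.

```python
import json

# Hypothetical labeled examples; the real data is the "Opinion"
# dataset (Scenario 1) from Loconte et al. (2023).
examples = [
    {"text": "I think the new policy genuinely helps students.", "label": "truthful"},
    {"text": "I have always supported this idea with all my heart.", "label": "deceptive"},
]

# Placeholder instruction; the actual prompt used in the study may differ.
SYSTEM_PROMPT = "Classify the following opinion as 'truthful' or 'deceptive'."

def to_chat_record(example):
    """Convert one labeled opinion into a chat-format fine-tuning
    record: system instruction, user text, assistant label."""
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": example["text"]},
            {"role": "assistant", "content": example["label"]},
        ]
    }

# Write one JSON object per line (JSONL), as the fine-tuning API expects.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(to_chat_record(ex)) + "\n")
```

The resulting `train.jsonl` can then be uploaded and used to launch a fine-tuning job on a `gpt-3.5-turbo` base model.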


Keywords

LLM, AI, neuroscience

Data

Data set: Opinion under Scenario 1, as used in Loconte et al. (2023)

Methods

  • LLM fine-tuning: GPT-3.5 (gpt-3.5-turbo-0613)
  • Common Language Effect Size (CLES)
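The Common Language Effect Size (CLES) is the probability that a value drawn at random from one group exceeds a value drawn at random from the other (with ties counted as half). A minimal sketch, assuming the two groups are linguistic-feature values for truthful vs. deceptive statements (the sample numbers below are illustrative only):

```python
def cles(group_a, group_b):
    """Common Language Effect Size: P(a > b) for a random pair
    (a, b) with a from group_a and b from group_b; ties count 0.5."""
    wins = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                wins += 1.0
            elif a == b:
                wins += 0.5
    return wins / (len(group_a) * len(group_b))

# Illustrative feature values for two groups of statements.
truthful_scores = [3, 4, 5]
deceptive_scores = [1, 2, 3]
print(cles(truthful_scores, deceptive_scores))  # 8.5 of 9 pairs -> ~0.944
```

A CLES near 0.5 means the two groups are indistinguishable on that feature; values near 0 or 1 indicate a strong separation.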

Software

  • Python

Files

  • Report: Report.pdf
  • Presentation: Presentation.pdf

Languages

Language:Jupyter Notebook 100.0%