This study evaluates the performance of a fine-tuned GPT-3.5 model in detecting deceptive opinions. Following Loconte et al. (2023), we use a dataset of opinions labeled as either truthful or deceptive and fine-tune GPT-3.5 to improve its deception-detection capability. The fine-tuned GPT-3.5 outperforms the FLAN-T5 model presented in that article in accuracy, and it relies on statistically significant linguistic features to distinguish truthful from deceptive statements. This research contributes to the potential applications of large language models in fields requiring deception detection.
Keywords: LLM, AI, neuroscience
Dataset

Opinions under Scenario 1 used in Loconte et al. (2023)

Methods
- LLM fine-tuning: OpenAI GPT-3.5 (gpt-3.5-turbo-0613)
- Common Language Effect Size (CLES)
- Python
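The fine-tuning step above can be illustrated by preparing training data in OpenAI's chat fine-tuning format: one JSON object per line, each holding a `messages` list of system/user/assistant turns, where the assistant turn is the gold label. The example opinions and system prompt below are hypothetical placeholders, not the actual Loconte et al. (2023) data; a minimal sketch:

```python
import json

# Hypothetical labeled examples standing in for the Scenario 1
# opinions of Loconte et al. (2023).
examples = [
    ("I truly enjoyed my stay; the staff were kind and helpful.", "truthful"),
    ("The room was perfect in every single way, absolutely flawless.", "deceptive"),
]

# Hypothetical instruction; the actual prompt used in the study may differ.
SYSTEM = "Classify the following opinion as 'truthful' or 'deceptive'."

def to_chat_record(text, label):
    # Each line of the JSONL training file is one chat exchange:
    # the system prompt, the opinion, and the target label.
    return {"messages": [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": text},
        {"role": "assistant", "content": label},
    ]}

with open("train.jsonl", "w") as f:
    for text, label in examples:
        f.write(json.dumps(to_chat_record(text, label)) + "\n")
```

The resulting `train.jsonl` is what gets uploaded to the fine-tuning API; the job itself is then created against the base gpt-3.5-turbo model.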
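The Common Language Effect Size listed above is the probability that a value drawn at random from one group exceeds a value drawn from the other (ties counted as half). A minimal sketch, with hypothetical feature values in place of the study's actual linguistic measurements:

```python
from itertools import product

def cles(x, y):
    """Common Language Effect Size: P(a random draw from x exceeds
    a random draw from y), counting ties as 0.5."""
    pairs = list(product(x, y))
    greater = sum(a > b for a, b in pairs)
    ties = sum(a == b for a, b in pairs)
    return (greater + 0.5 * ties) / len(pairs)

# Hypothetical per-statement scores for some linguistic feature.
truthful = [4.1, 3.8, 4.5, 3.9]
deceptive = [3.2, 4.0, 3.0, 4.2]
print(cles(truthful, deceptive))  # → 0.6875
```

A CLES of 0.5 means the feature does not separate the groups; values near 0 or 1 indicate a strong separation in one direction.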
- Report: Report.pdf
- Presentation: Presentation.pdf