Breakend / LLM-Tuning-Safety.github.io


Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!

This is the project page for the paper "Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!"

Languages

CSS 70.8%, JavaScript 18.3%, HTML 10.9%