purseclab / LLM_Security_Privacy_Advice


Artifacts for the ACSAC 2023 paper "Can Large Language Models Provide Security & Privacy Advice? Measuring the Ability of LLMs to Refute Misconceptions"


License: MIT License


Languages

Python: 95.5%
Dockerfile: 4.5%