Hao Zhao's starred repositories
long-context-icl
Data and code for the preprint "In-Context Learning with Long-Context Models: An In-Depth Exploration"
Toolkit-for-Prompt-Compression
Toolkit for Prompt Compression
icl-alignment
Is In-Context Learning Sufficient for Instruction Following in LLMs?
Long-Context-Data-Engineering
Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context"
dl-visualization
This is the source code for the animations in the series "Visualizing Deep Learning"
llm-adaptive-attacks
Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [arXiv, Apr 2024]
jailbreakbench
An Open Robustness Benchmark for Jailbreaking Language Models [arXiv 2024]
long-is-more-for-alignment
Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024]
emoji-cheat-sheet
A markdown version of the emoji cheat sheet
custom-brand-icons
Custom brand icons for Home Assistant
arxiv-latex-cleaner
arXiv LaTeX Cleaner: Easily clean the LaTeX code of your paper to submit to arXiv
Reflection_Tuning
[ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning
adversarial-random-search-gpt4
Adversarial Attacks on GPT-4 via Simple Random Search [Dec 2023]
GPT-Fathom
An open-source and reproducible LLM evaluation suite that benchmarks 10+ leading open-source and closed-source LLMs, as well as OpenAI's earlier models, on 20+ curated benchmarks under aligned settings.