Andrew Chan's starred repositories
tree-of-thought-llm
[NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
chatgpt-comparison-detection
Human ChatGPT Comparison Corpus (HC3), Detectors, and more! 🔥
prometheus-eval
Evaluate your LLM's responses with Prometheus and GPT-4 💯
RLHF-Reward-Modeling
Recipes for training reward models for RLHF.
language-model-arithmetic
Controlled Text Generation via Language Model Arithmetic
MegaMolBART
A deep learning model for small-molecule drug discovery and cheminformatics based on SMILES
proxy-tuning
Code associated with "Tuning Language Models by Proxy" (Liu et al., 2024)
easy-to-hard-generalization
Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data"
MemoryMosaics
Memory Mosaics are networks of associative memories working in concert to achieve a prediction task.
ReaLMistake
Benchmark and code for the paper "Evaluating LLMs at Detecting Errors in LLM Responses".
LPM-24-Dataset
Creation details, evaluation, and benchmark models for the L+M-24 Dataset, featured as the shared task at the Language + Molecules Workshop at ACL 2024.
CoT_Causal_Analysis
Repository for the paper "LLMs with Chain-of-Thought Are Non-Causal Reasoners"
LM_random_walk
Official code for the paper "Understanding the Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation"