Andrew Chan (LFhase)

Company: Someplace

Location: Somewhere

Home Page: https://lfhase.win

Andrew Chan's starred repositories

tree-of-thought-llm

[NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models

Language: Python · License: MIT · Stargazers: 4,531 · Issues: 120 · Issues: 54

RL4LMs

A modular RL library for fine-tuning language models to human preferences

Language: Python · License: Apache-2.0 · Stargazers: 2,159 · Issues: 25 · Issues: 54

chatgpt-comparison-detection

Human ChatGPT Comparison Corpus (HC3), Detectors, and more! 🔥

JetMoE

Reaching LLaMA2 Performance with 0.1M Dollars

Language: Python · License: Apache-2.0 · Stargazers: 955 · Issues: 8 · Issues: 9

prometheus-eval

Evaluate your LLM's responses with Prometheus and GPT-4 💯

Language: Python · License: Apache-2.0 · Stargazers: 737 · Issues: 3 · Issues: 27

RLHF-Reward-Modeling

Recipes to train reward models for RLHF.

Language: Python · License: Apache-2.0 · Stargazers: 586 · Issues: 19 · Issues: 25

language-model-arithmetic

Controlled Text Generation via Language Model Arithmetic

Language: Python · License: MIT · Stargazers: 194 · Issues: 8 · Issues: 7

ect

Consistency Models Made Easy

MegaMolBART

A deep learning model for small molecule drug discovery and cheminformatics based on SMILES

ReMax

Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models"

MolGen

[ICLR 2024] Domain-Agnostic Molecular Generation with Chemical Feedback

Language: Python · License: MIT · Stargazers: 124 · Issues: 7 · Issues: 13

proxy-tuning

Code associated with "Tuning Language Models by Proxy" (Liu et al., 2024)

LLM4Chem

Official code repo for the paper "LlaSMol: Advancing Large Language Models for Chemistry with a Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset"

Language: Python · License: MIT · Stargazers: 57 · Issues: 7 · Issues: 5

easy-to-hard-generalization

Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data"

Language: Python · License: Apache-2.0 · Stargazers: 44 · Issues: 6 · Issues: 0

benbench

Benchmarking Benchmark Leakage in Large Language Models

MemoryMosaics

Memory Mosaics are networks of associative memories working in concert to achieve a prediction task.

Language: Python · License: Apache-2.0 · Stargazers: 30 · Issues: 5 · Issues: 3

LPM-24-Dataset

This repository contains information on the creation, evaluation, and benchmark models for the L+M-24 Dataset. L+M-24 will be featured as the shared task at The Language + Molecules Workshop at ACL 2024.

Language: Python · Stargazers: 25 · Issues: 0 · Issues: 0

SimSGT

[NeurIPS 2023] "Rethinking Tokenizer and Decoder in Masked Graph Modeling for Molecules"

ReaLMistake

This repository includes a benchmark and code for the paper "Evaluating LLMs at Detecting Errors in LLM Responses".

Language: Python · License: NOASSERTION · Stargazers: 21 · Issues: 8 · Issues: 0

PairS

Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; arXiv preprint arXiv:2403.16950)

Language: Python · License: MIT · Stargazers: 20 · Issues: 6 · Issues: 0

CoT_Causal_Analysis

Repository for the paper "LLMs with Chain-of-Thought Are Non-Causal Reasoners"

Language: Python · Stargazers: 14 · Issues: 2 · Issues: 0

LM_random_walk

Official code for the paper "Understanding the Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation"

Language: Python · License: MIT · Stargazers: 12 · Issues: 1 · Issues: 0

SCARCE

[ICML 2024] Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical

GMT

[ICML 2024] How Interpretable Are Interpretable Graph Neural Networks?

Language: Python · License: MIT · Stargazers: 5 · Issues: 3 · Issues: 0

GOODHSE

[CVPR 2024] Improving Out-of-Distribution Generalization in Graphs via Hierarchical Semantic Environments

Language: Python · Stargazers: 5 · Issues: 2 · Issues: 0