Andrew Chan (LFhase)

Company: Someplace

Location: Somewhere

Home Page: https://lfhase.win

Andrew Chan's starred repositories

tree-of-thought-llm

[NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models

Language: Python · License: MIT · Stargazers: 4303 · Issues: 120 · Issues: 52
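Tree of Thoughts searches over intermediate "thoughts" instead of committing to a single chain. A minimal sketch of the breadth-first variant, where `propose` and `score` are toy stand-ins for the LLM calls the real method makes (generate candidate thoughts, then evaluate them):

```python
def propose(thought):
    """Expand a partial solution into candidate next thoughts."""
    return [thought + (step,) for step in (0, 1)]

def score(thought):
    """Heuristic value of a partial solution (higher is better)."""
    return sum(thought)

def tree_of_thoughts(root=(), depth=3, beam=2):
    """Breadth-first search that keeps the `beam` best thoughts per level."""
    frontier = [root]
    for _ in range(depth):
        candidates = [t for node in frontier for t in propose(node)]
        # Prune to the highest-scoring partial solutions before expanding again.
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return frontier[0]
```

In the actual repo both helpers are backed by LLM prompts; the search skeleton is the same.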

RL4LMs

A modular RL library to fine-tune language models to human preferences

Language: Python · License: Apache-2.0 · Stargazers: 2117 · Issues: 26 · Issues: 54

chatgpt-comparison-detection

Human ChatGPT Comparison Corpus (HC3), Detectors, and more! 🔥

JetMoE

Reaching LLaMA2 Performance with 0.1M Dollars

Language: Python · License: Apache-2.0 · Stargazers: 935 · Issues: 8 · Issues: 8

prometheus-eval

Evaluate your LLM's response with Prometheus and GPT4 💯

Language: Python · License: Apache-2.0 · Stargazers: 617 · Issues: 3 · Issues: 17

RLHF-Reward-Modeling

Recipes to train reward models for RLHF.

Language: Python · License: Apache-2.0 · Stargazers: 292 · Issues: 6 · Issues: 10
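Reward models for RLHF are commonly trained with a pairwise Bradley–Terry objective: the model should score the chosen response above the rejected one. A minimal sketch of that loss in plain Python (no framework, scalar rewards only):

```python
import math

def bradley_terry_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise preference loss: -log sigmoid(r_chosen - r_rejected)."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks as the reward model separates the pair correctly,
# and equals log(2) when the two responses are scored identically.
```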

language-model-arithmetic

Controlled Text Generation via Language Model Arithmetic

Language: Python · License: MIT · Stargazers: 176 · Issues: 8 · Issues: 6
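Language model arithmetic composes models as linear formulas over their per-token logits, e.g. subtracting an attribute model to steer generation away from what it prefers. A toy sketch over a three-token vocabulary (the "models" here are hard-coded logit vectors, not real LMs):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    z = sum(es)
    return [e / z for e in es]

# Hypothetical next-token logits from two models over a 3-token vocab.
fluent = [2.0, 1.0, 0.0]   # base language model
toxic = [0.0, 3.0, 0.0]    # attribute model that up-weights unwanted token 1

# The formula "fluent - 0.5 * toxic": steer away from the unwanted attribute.
combined = [f - 0.5 * t for f, t in zip(fluent, toxic)]
```

Sampling from `softmax(combined)` then makes token 1 less likely than it was under the base model alone.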

MegaMolBART

A deep learning model for small molecule drug discovery and cheminformatics based on SMILES

ect

Consistency Models Made Easy

ReMax

Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models".

MolGen

[ICLR 2024] Domain-Agnostic Molecular Generation with Chemical Feedback

Language: Python · License: MIT · Stargazers: 114 · Issues: 7 · Issues: 12

proxy-tuning

Code associated with Tuning Language Models by Proxy (Liu et al., 2024)
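Proxy tuning steers a large base model at decoding time by adding the logit difference between a small tuned "expert" and its untuned counterpart, so the large model itself is never fine-tuned. A sketch with hard-coded logit vectors (illustrative numbers, not real model outputs):

```python
# Next-token logits over a toy 3-token vocabulary.
base = [1.0, 2.0, 0.5]     # large, untuned model
expert = [0.0, 3.0, 0.0]   # small model after fine-tuning
anti = [0.0, 1.0, 0.0]     # the same small model before fine-tuning

# Shift the base model by what fine-tuning changed in the small model.
proxy = [b + (e - a) for b, e, a in zip(base, expert, anti)]
```

The adjusted distribution inherits the fine-tuning signal (here, a stronger preference for token 1) while keeping the base model's overall scores.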

LLM4Chem

Official code repo for the paper "LlaSMol: Advancing Large Language Models for Chemistry with a Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset"

Language: Python · License: MIT · Stargazers: 44 · Issues: 6 · Issues: 1

easy-to-hard-generalization

Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data"

Language: Python · License: Apache-2.0 · Stargazers: 43 · Issues: 5 · Issues: 0

benbench

Benchmarking Benchmark Leakage in Large Language Models

SimSGT

[NeurIPS 2023] "Rethinking Tokenizer and Decoder in Masked Graph Modeling for Molecules"

Language: Python · Stargazers: 25 · Issues: 0 · Issues: 0

MemoryMosaics

Memory Mosaics are networks of associative memories working in concert to achieve a prediction task.

Language: Python · License: Apache-2.0 · Stargazers: 24 · Issues: 0 · Issues: 0

ReaLMistake

This repository includes a benchmark and code for the paper "Evaluating LLMs at Detecting Errors in LLM Responses".

Language: Python · License: NOASSERTION · Stargazers: 17 · Issues: 0 · Issues: 0

LPM-24-Dataset

This repository contains information on the creation, evaluation, and benchmark models for the L+M-24 Dataset. L+M-24 will be featured as the shared task at The Language + Molecules Workshop at ACL 2024.

Language: Python · Stargazers: 16 · Issues: 0 · Issues: 0

CoT_Causal_Analysis

Repository for the paper "LLMs with Chain-of-Thought Are Non-Causal Reasoners".

Language: Python · Stargazers: 12 · Issues: 0 · Issues: 0

PairS

Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; arXiv preprint arXiv:2403.16950)

Language: Python · License: MIT · Stargazers: 9 · Issues: 6 · Issues: 0

LM_random_walk

Official code for the paper "Understanding the Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation".

Language: Python · License: MIT · Stargazers: 8 · Issues: 1 · Issues: 0

SCARCE

[ICML 2024] Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical

GOODHSE

[CVPR 2024] Improving out-of-distribution generalization in graphs via hierarchical semantic environments

Language: Python · Stargazers: 3 · Issues: 0 · Issues: 0

GMT

[ICML 2024] How Interpretable Are Interpretable Graph Neural Networks?

Language: Python · License: MIT · Stargazers: 2 · Issues: 0 · Issues: 0