Cola Chan (141forever)


Company: East China Normal University

Location: Shanghai

Home Page: https://141forever.github.io


Cola Chan's starred repositories

langchain

🦜🔗 Build context-aware reasoning applications

Language: Jupyter Notebook · License: MIT · Stargazers: 90,778 · Watchers: 679 · Issues: 7,408

vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

Language: Python · License: Apache-2.0 · Stargazers: 24,895 · Watchers: 219 · Issues: 3,998

mamba

Mamba SSM architecture

Language: Python · License: Apache-2.0 · Stargazers: 12,154 · Watchers: 99 · Issues: 470

RWKV-LM

RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.

Language: Python · License: Apache-2.0 · Stargazers: 12,118 · Watchers: 134 · Issues: 198

DeepSpeedExamples

Example models using DeepSpeed

Language: Python · License: Apache-2.0 · Stargazers: 5,950 · Watchers: 75 · Issues: 529

alignment-handbook

Robust recipes to align language models with human and AI preferences

Language: Python · License: Apache-2.0 · Stargazers: 4,355 · Watchers: 110 · Issues: 131

awesome-RLHF

A curated list of reinforcement learning with human feedback resources (continually updated)

llm_interview_note

Notes on the knowledge and interview questions relevant to large language model (LLM) algorithm and application engineers

Awesome-LLM-KG

Awesome papers about unifying LLMs and KGs

llm-hallucination-survey

Reading list of hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models"

multiwoz

Source code for end-to-end dialogue model from the MultiWOZ paper (Budzianowski et al. 2018, EMNLP)

Language: Python · License: MIT · Stargazers: 842 · Watchers: 17 · Issues: 62

TruthfulQA

TruthfulQA: Measuring How Models Imitate Human Falsehoods

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 561 · Watchers: 8 · Issues: 10

HaluEval

This is the repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models.

Language: Python · License: MIT · Stargazers: 362 · Watchers: 9 · Issues: 11

MuTual

A Dataset for Multi-Turn Dialogue Reasoning

Multi-view-Consistency-for-MWP

EMNLP 2022: Multi-View Reasoning: Consistent Contrastive Learning for Math Word Problem

Language: Python · License: MIT · Stargazers: 231 · Watchers: 2 · Issues: 1

finetuned-qlora-falcon7b-medical

Fine-tuning of the Falcon-7B LLM using QLoRA on a mental-health conversational dataset

Language: Jupyter Notebook · License: MIT · Stargazers: 229 · Watchers: 4 · Issues: 2

Emotional-Support-Conversation

Data and code for the ACL 2021 paper "Towards Emotional Support Dialog Systems"

Language: Python · License: NOASSERTION · Stargazers: 223 · Watchers: 4 · Issues: 36

HaDes

Token-level Reference-free Hallucination Detection

Language: Python · License: MIT · Stargazers: 90 · Watchers: 6 · Issues: 1

FactCHD

[IJCAI 2024] FactCHD: Benchmarking Fact-Conflicting Hallucination Detection

Language: Python · License: MIT · Stargazers: 75 · Watchers: 4 · Issues: 2

mPLUG-HalOwl

mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating

Language: Python · License: MIT · Stargazers: 67 · Watchers: 1 · Issues: 5

llm-uncertainty

Code repository for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs"

felm

GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models"

Synthetic-Persona-Chat

The Synthetic-Persona-Chat dataset is a synthetically generated persona-based dialogue dataset. It extends the original Persona-Chat dataset.

Language: Python · Stargazers: 47 · Watchers: 5 · Issues: 0

Self-Verification

Code and demo program for LLM self-verification

Language: Python · License: Apache-2.0 · Stargazers: 43 · Watchers: 1 · Issues: 0

factor

Code and data for the FACTOR paper

Language: Python · License: MIT · Stargazers: 36 · Watchers: 4 · Issues: 6

ChatProtect

Code for the paper "Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation"

Language: Python · License: Apache-2.0 · Stargazers: 31 · Watchers: 8 · Issues: 2