JudeLee (Dongyub Lee)'s repositories
HMNet-End-to-End-Abstractive-Summarization-for-Meetings
"End-to-End Abstractive Summarization for Meetings" paper - Unofficial PyTorch Implementation
alexa-with-dstc10-track2-dataset
DSTC10 Track 2 - Knowledge-grounded Task-oriented Dialogue Modeling on Spoken Conversations
attention_with_linear_biases
Code for the ALiBi method for transformer language models
awesome-scene-understanding
😎 A list of papers for scene understanding in computer vision.
CLIP4Cir
Training code for composed image retrieval using CLIP
debug-mistakes-cce
Meaningfully debugging model mistakes with conceptual counterfactual explanations. ICML 2022
decision-transformer
Official codebase for Decision Transformer: Reinforcement Learning via Sequence Modeling.
DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
fuzzywuzzy
Fuzzy String Matching in Python
fvcore
Collection of common code shared among different research projects in the FAIR computer vision team.
GLM
GLM (General Language Model)
guidance
A guidance language for controlling large language models.
Instruction-Tuning-Papers
Reading list on instruction tuning. A trend starting from Natural-Instruction (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022).
Knowldege-Grounded-Conversation
A Knowledge-Grounded Conversation (KGC) paper reading list maintained by Shandong University
LAVIS
LAVIS - A One-stop Library for Language-Vision Intelligence
longeval-summarization
Official repository for our EACL 2023 paper "LongEval: Guidelines for Human Evaluation of Faithfulness in Long-form Summarization" (https://arxiv.org/abs/2301.13298).
PaLM-rlhf-pytorch
Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM
SimpleReDial-v1
The source code of the DR-BERT model and baselines
summarize-from-feedback
Code for "Learning to summarize from human feedback"
task_vectors
Editing Models with Task Arithmetic
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs