Zhuang Chen's repositories
BIG-bench
Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models
CauAIN
Code for IJCAI 2022 accepted paper titled "CauAIN: Causal Aware Interaction Network for Emotion Recognition in Conversations"
chatbot-ui
An open source ChatGPT UI.
ChatGLM-Efficient-Tuning
Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT
chatgpt-vercel
Elegant and powerful. Powered by OpenAI and Vercel.
CoMPM
Context Modeling with Speaker's Pre-trained Memory Tracking for Emotion Recognition in Conversation (NAACL 2022)
CSENet
CSENet: Complex Squeeze-and-Excitation Network for Speech Depression Level Prediction (ICASSP 2022)
CSrankings
A web app for ranking computer science departments according to their research output in selective venues, and for finding active faculty across a wide range of areas.
Deep-learning-books
Books on machine learning, deep learning, math, NLP, CV, RL, etc.
Emotional-Support-Conversation
Data and codes for ACL 2021 paper: Towards Emotional Support Dialog Systems
GLM-130B
GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
Harry-Potter-Dialogue-Dataset
The repository of the Harry Potter Dialogue Dataset.
ICASSP2022-Depression
Automatic Depression Detection: a GRU/BiLSTM-based Model and an Emotional Audio-Textual Corpus
LLaMA-Factory
Unify Efficient Fine-Tuning of 100+ LLMs
MMSA
MMSA is a unified framework for Multimodal Sentiment Analysis.
MMSA-FET
A Tool for extracting multimodal features from videos.
multimodal-deep-learning
Various models for multimodal representation learning and multimodal fusion, applied to downstream tasks such as multimodal sentiment analysis.
Multimodal-Infomax
Official implementation of the paper "Improving Multimodal Fusion with Hierarchical Mutual Information Maximization for Multimodal Sentiment Analysis" (EMNLP 2021).
P-tuning
A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too".
P-tuning-v2
An optimized prompt tuning strategy comparable to fine-tuning across model scales and tasks.
self-instruct
Aligning pretrained language models with instruction data generated by themselves.
Self-MM
Code for the paper "Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis"
stanford_alpaca
Code and documentation to train Stanford's Alpaca models and generate the data.
zhchen18.github.io
GitHub Pages template for academic personal websites, forked from mmistakes/minimal-mistakes