Guangneng Hu's repositories
njuhugn.github.io
Guangneng Hu, Assoc. Prof. @ Xidian Univ, PhD at HKUST, BA/MS at Nanjing Univ.
LLMsPracticalGuide
A curated list of practical guide resources for LLMs (LLM tree, examples, papers)
AdaVQA
Implementation of our IJCAI-21 paper "AdaVQA: Overcoming Language Priors with Adapted Margin Loss".
ALLaVA
Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model
CLIP_prefix_caption
Simple image captioning model
DAC
Repository for the paper "Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models"
etr-nlp-mtl
[CVPR '23] Towards Mitigating Task Interference in Multi-Task Learning via Explicit Task Routing with Non-Learnable Primitives
IJCAI-23-PFedRec
Code for ijcai-23 paper "Dual Personalization on Federated Recommendation"
insightface
State-of-the-art 2D and 3D Face Analysis Project
llama
Inference code for LLaMA models
LLMRec
[WSDM'2024 Oral] "LLMRec: Large Language Models with Graph Augmentation for Recommendation"
NineRec
Multimodal Dataset and Benchmark for Multi-domain and Cross-domain Recommendation Systems
nlxgpt
NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks, CVPR 2022 (Oral)
RecFormer
Replication of the KDD'23 paper "Text Is All You Need: Learning Language Representations for Sequential Recommendation".
rex
Official Repository for CVPR 2022 paper "REX: Reasoning-aware and Grounded Explanation"
RLMRec
[WWW'2024] "RLMRec: Representation Learning with Large Language Models for Recommendation"
RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
TASTE
[CIKM 2023] Code repo for our paper "Text Matching Improves Sequential Recommendation by Reducing Popularity Biases".
TITAN-evaluation-master
Evaluation code for TITAN (IJCAI 2023)
TransGTR
Open-source code for TransGTR.
TVQA
[EMNLP 2018] PyTorch code for TVQA: Localized, Compositional Video Question Answering
VQACL
VQACL: A Novel Visual Question Answering Continual Learning Setting (CVPR'23)
wanda
A simple and effective LLM pruning approach.