HITsz-TMG's repositories
UMOE-Scaling-Unified-Multimodal-LLMs
Code for the paper "Uni-MoE: Scaling Unified Multimodal Models with Mixture of Experts"
awesome-llm-attributions
A Survey of Attributions for Large Language Models
Prompt-BioEL
Code and data of AAAI 2023 paper "Improving Biomedical Entity Linking with Cross-Entity Interaction".
Multi-agent-peer-review
Official implementation of our arXiv paper "Towards Reasoning in Large Language Models via Multi-Agent Peer Review Collaboration".
VisionGraph
Benchmark and datasets for the ICML 2024 paper "VisionGraph: Leveraging Large Multimodal Models for Graph Theory Problems in Visual Context"
ExplainCPE
ExplainCPE: A Free-Text Explanation Benchmark of Chinese Pharmacist Examination
Multimodal-In-Context-Tuning
Code and datasets for the LREC-COLING 2024 paper "A Multimodal In-Context Tuning Approach for E-Commerce Product Description Generation"
Read-and-Select
Code of "A Read-and-Select Framework for Zero-shot Entity Linking" (EMNLP 2023 Findings).
Sparse-Retrieval-Fewshot-EL
Code of EMNLP 2023 paper "Revisiting Sparse Retrieval for Few-shot Entity Linking".
Cognitive-Visual-Language-Mapper
Code and datasets for our ACL 2024 main conference paper "Cognitive Visual-Language Mapper: Advancing Multimodal Comprehension with Enhanced Visual Knowledge Alignment"