ML@Rutgers's repositories
llm-continual-learning-survey
Continual Learning of Large Language Models: A Comprehensive Survey
unified-continual-learning
[NeurIPS 2023] A Unified Approach to Domain Incremental Learning with Memory: Theory and Algorithm
multimodal-needle-in-a-haystack
Code and data for the benchmark "Multimodal Needle in a Haystack (MMNeedle): Benchmarking Long-Context Capability of Multimodal Large Language Models"
interpretable-foundation-models
[ICML 2024] Probabilistic Conceptual Explainers (PACE): Trustworthy Conceptual Explanations for Vision Foundation Models
multi-domain-active-learning
[AAAI 2024] Composite Active Learning: Towards Multi-Domain Active Learning with Theoretical Guarantees
variational-imbalanced-regression
[NeurIPS 2023] Variational Imbalanced Regression: Fair Uncertainty Quantification via Probabilistic Smoothing
ECBM
[ICLR 2024] Energy-Based Concept Bottleneck Models: Unifying Prediction, Concept Intervention, and Probabilistic Interpretations
Formal-LLM
Formal-LLM: Integrating Formal Language and Natural Language for Controllable LLM-based Agents
train-free-uncertainty
[AAAI 2022] Official Code for "Training-Free Uncertainty Estimation for Dense Regression: Sensitivity as a Surrogate"