MME Benchmarks (MME-Benchmarks)


Organization data from GitHub: https://github.com/MME-Benchmarks

Multimodal LLM benchmarks of the MME series

Home Page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models

GitHub: @MME-Benchmarks

MME-Benchmarks repositories

MME-RealWorld

✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans?

Language: Python · Stargazers: 137 · Issues: 0

Video-MME

✨✨[CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis

Stargazers: 678 · Issues: 0

MME-CoT

MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency

Language: Python · Stargazers: 133 · Issues: 0

MME-Unify

MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models

Language: Python · Stargazers: 41 · Issues: 0