MLCommons repositories
ck
Collective Mind (CM) is a small, modular, cross-platform, decentralized workflow automation framework with a human-friendly interface and reusable automation recipes. It aims to make it easier to build, run, benchmark, and optimize AI, ML, and other applications and systems across diverse and continuously changing models, data, software, and hardware.
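For context, CM exposes a dict-in/dict-out Python API alongside its CLI. Below is a minimal sketch, assuming the `cmind` package from PyPI (`pip install cmind`) and its `access()` entry point; the `detect,os` tags name one illustrative recipe, not the only one.

```python
# Minimal sketch of driving CM from Python, assuming `pip install cmind`.
import cmind

# CM actions take a dict and return a dict; 'return' == 0 means success.
result = cmind.access({
    "action": "run",          # run an automation
    "automation": "script",   # the "script" automation holds the recipes
    "tags": "detect,os",      # select a reusable recipe by its tags
    "out": "con",             # stream output to the console
})
if result["return"] != 0:
    raise RuntimeError(result.get("error", "CM action failed"))
```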
algorithmic-efficiency
MLCommons Algorithmic Efficiency is a benchmark and competition that measures neural network training speedups due to improvements in both training algorithms and models.
training_policies
Issues related to MLPerf™ training policies, including rules and suggested changes.
mobile_app_open
The open-source MLPerf™ Mobile benchmark app.
modelbench
Run safety benchmarks against AI models and view detailed reports showing how well they performed.
mobile_models
MLPerf™ Mobile models.
modelgauge
Make it easy to automatically and uniformly measure the behavior of many AI systems.
inference_results_v4.0
This repository contains the results and code for the MLPerf™ Inference v4.0 benchmark.
mobile_open
MLPerf™ Mobile benchmarks.
cm4mlops
A collection of reusable, cross-platform automation recipes (CM scripts) with a human-friendly interface and minimal dependencies. They aim to make it easier to build, run, benchmark, and optimize AI, ML, and other applications and systems across diverse and continuously changing models, data sets, software, and hardware (cloud/edge).
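As a hedged illustration of how these recipes are typically consumed, the sketch below pulls the cm4mlops collection and then runs one script by its tags through the same `cmind.access()` API; the `get,python` tags are an assumed example recipe, chosen for illustration.

```python
import cmind

# Fetch the cm4mlops recipe collection (assumes git and network access).
r = cmind.access({"action": "pull", "automation": "repo",
                  "artifact": "mlcommons@cm4mlops"})
assert r["return"] == 0, r.get("error")

# Run one CM script by its tags ("get,python" is an illustrative choice).
r = cmind.access({"action": "run", "automation": "script",
                  "tags": "get,python", "out": "con"})
assert r["return"] == 0, r.get("error")
```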
cm4mlperf-results
CM interface and automation recipes to analyze MLPerf Inference, Tiny, and Training results. The goal is to make it easier for the community to visualize, compare, and reproduce MLPerf results, and to add derived metrics such as Performance/Watt or Performance/$.
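The derived metrics mentioned above are simple ratios over reported results. A short sketch with purely hypothetical numbers (none come from an actual submission):

```python
# Hypothetical inputs, for illustration only: a throughput figure from an
# MLPerf Inference result, average measured power, and system cost.
throughput_samples_per_sec = 12_000.0
avg_power_watts = 350.0
system_cost_usd = 15_000.0

perf_per_watt = throughput_samples_per_sec / avg_power_watts
perf_per_dollar = throughput_samples_per_sec / system_cost_usd
print(f"Performance/Watt: {perf_per_watt:.1f} samples/s/W")
print(f"Performance/$:    {perf_per_dollar:.2f} samples/s/$")
```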
hpc_results_v3.0
This repository contains the results and code for the MLPerf™ HPC Training v3.0 benchmark.
tiny_results_v1.2
This repository contains the results and code for the MLPerf™ Tiny Inference v1.2 benchmark.