MLCommons repositories
ck
Collective Mind (CM) is a simple, modular, cross-platform, and decentralized workflow-automation framework with a human-friendly interface and reusable automation recipes. It makes it easier to compose, run, benchmark, and optimize AI, ML, and other applications and systems across diverse and continuously changing models, data, software, and hardware.
algorithmic-efficiency
MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements in both training algorithms and models.
training_policies
Issues related to MLPerf™ training policies, including rules and suggested changes
inference_policies
Issues related to MLPerf™ Inference policies, including rules and suggested changes
training_results_v1.0
This repository contains the results and code for the MLPerf™ Training v1.0 benchmark.
mobile_app_open
Source code for the MLPerf™ Mobile benchmark app
training_results_v2.0
This repository contains the results and code for the MLPerf™ Training v2.0 benchmark.
training_results_v3.1
This repository contains the results and code for the MLPerf™ Training v3.1 benchmark.
inference_results_v1.1
This repository contains the results and code for the MLPerf™ Inference v1.1 benchmark.
inference_results_v4.0
This repository contains the results and code for the MLPerf™ Inference v4.0 benchmark.
hpc_results_v2.0
This repository contains the results and code for the MLPerf™ HPC Training v2.0 benchmark.
ck_mlperf_results
CM interface and automation for MLPerf Inference, Tiny, and Training results. The goal is to make it easier for the community to visualize, compare, and reproduce MLPerf results and to add derived metrics such as performance per watt or performance per dollar.
hpc_results_v3.0
This repository contains the results and code for the MLPerf™ HPC Training v3.0 benchmark.
github-action
MLCommons CLA bot GitHub Action