MLCommons repositories
cm4mlops
A collection of portable, reusable, and cross-platform automation recipes (CM scripts) with a human-friendly interface and minimal dependencies, designed to make it easier to build, run, benchmark, and optimize AI, ML, and other applications and systems across diverse and continuously changing models, datasets, software, and hardware (cloud/edge).
modelbench
Run safety benchmarks against AI models and view detailed reports showing how well they perform.
ck
Collective Mind (CM) is a small, modular, cross-platform, and decentralized workflow automation framework with a human-friendly interface and reusable automation recipes, designed to make it easier to build, run, benchmark, and optimize AI, ML, and other applications and systems across diverse and continuously changing models, data, software, and hardware.
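For context, recipes such as those in cm4mlops are typically driven through CM's `cm` command line or its Python interface. Below is a minimal sketch using the `cmind` package (`pip install cmind`) from the ck repository; the repository name and script tags follow common cm4mlops examples and may differ across versions:

    # Minimal sketch of driving CM automation recipes from Python,
    # assuming the cmind package (pip install cmind) is installed.
    import cmind

    # Pull the cm4mlops repository of automation recipes (CM scripts).
    r = cmind.access({'action': 'pull',
                      'automation': 'repo',
                      'artifact': 'mlcommons@cm4mlops'})
    if r['return'] > 0:
        raise RuntimeError(r.get('error', 'cm pull failed'))

    # Run a simple recipe by its tags; 'detect,os' is a basic
    # script that probes the host platform.
    r = cmind.access({'action': 'run',
                      'automation': 'script',
                      'tags': 'detect,os',
                      'out': 'con'})  # 'con' streams output to the console
    if r['return'] > 0:
        raise RuntimeError(r.get('error', 'cm script failed'))

The same calls map onto the CLI front end (cm pull repo mlcommons@cm4mlops; cm run script --tags=detect,os).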
mobile_app_open
Open-source MLPerf™ Mobile benchmarking app.
training_policies
Issues related to MLPerf™ training policies, including rules and suggested changes
modelgauge
Make it easy to automatically and uniformly measure the behavior of many AI systems.
algorithmic-efficiency
MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements in both training algorithms and models.
inference_results_v4.1
This repository contains the results and code for the MLPerf™ Inference v4.1 benchmark.
mobile_models
MLPerf™ Mobile models
algorithms_results_v0.5
This repository contains the results and code for the AlgoPerf v0.5 benchmark.
inference_results_v4.0
This repository contains the results and code for the MLPerf™ Inference v4.0 benchmark.