Repositories of the cTuning Foundation (a founding member of MLCommons)
ck-artifact-evaluation
Collective Knowledge repository to support artifact evaluation and reproducibility initiatives
ck-quantum
Miscellaneous resources for Quantum Collective Knowledge
ck-guide-images
Images for CK documentation
cm4research
CM interface and automation recipes to access, manage, prepare, run, and reproduce research projects from AI, ML, and systems conferences
ck-website
CK repository for the cKnowledge.org website
ck-tbd-suite
Prototyping CK workflows for ML training
ck-wa-extra
Extra resources in the Collective Knowledge format for ARM's Workload Automation framework
ck_mlperf_results
Outdated
mlcommons-ck
Note that since 2024-03-21 we use the "dev" branch of mlcommons@ck. This is a development fork of the MLCommons CM workflow automation framework
mlperf_inference_submissions_v3.0
MLPerf inference submissions v3.0 playground
mlperf_inference_submissions_v3.1
Community submission to MLPerf inference v3.1
mlperf_inference_submissions_v3.1a
Community submission to MLPerf inference v3.1 part 1
submissions_tiny_v1.1_by_taskforce_on_auto_and_repro
Automation and Reproducibility Study for TinyMLPerf
ck-env-2023-arc
CK repository with components and automation actions that enable portable workflows across diverse platforms, including Linux, Windows, macOS, and Android. It includes software detection plugins and meta-packages (code, data sets, models, scripts, etc.), allowing multiple versions to co-exist in a user or system environment
cm4mlops
A collection of reusable, cross-platform automation recipes (CM scripts) with a human-friendly interface and minimal dependencies, making it easier to build, run, benchmark, and optimize AI, ML, and other applications and systems across diverse and continuously changing models, data sets, software, and hardware (cloud/edge)
energyrunner
The EEMBC EnergyRunner application framework for the MLPerf Tiny benchmark.
inference_results_v3.0
This repository contains the results and code for the MLPerf™ Inference v3.0 benchmark.
inference_results_v3.1
This repository contains the results and code for the MLPerf™ Inference v3.1 benchmark.
neuralmagic-inference
Reference implementations of MLPerf™ inference benchmarks
octoml-inference
Reference implementations of MLPerf™ inference benchmarks
scc23-benchmarking
Instructions for submitting the benchmark results for the student cluster competition at SC23
training_results_v2.1
This repository contains the results and code for the MLPerf™ Training v2.1 benchmark.
Victima
Source code and scripts for Artifact Evaluation of the upcoming MICRO 2023 paper Victima.