Repositories under the mlperf-inference topic:
Collective Knowledge (CK), Collective Mind (CM/CMX), and MLPerf automations: community-driven projects that facilitate collaborative, reproducible research and help users run AI, ML, and other emerging workloads more efficiently and cost-effectively across diverse models, datasets, software, and hardware, using MLPerf methodology and benchmarks.
TinyNS: Platform-Aware Neurosymbolic Auto Tiny Machine Learning
AML aims to make benchmarking various AI architectures on Ampere CPUs a pleasurable experience :)
This repository contains automation scripts designed to run MLPerf Inference benchmarks. Originally developed for the Collective Mind (CM) automation framework, these scripts have been adapted to leverage the MLC automation framework, maintained by the MLCommons Benchmark Infrastructure Working Group.
A benchmark suite used to compare the performance of various models optimized by Adlik.
Development version of CodeReefied portable CK workflows for image classification and object detection. Stable "live" versions are available on the CodeReef portal.
MLPerf explorer beta
Automated test submissions for validating the MLPerf inference workflows.