Fast Machine Learning Lab's repositories
hls4ml-tutorial
Tutorial notebooks for hls4ml
qonnx_model_zoo
Model zoo for the Quantized ONNX (QONNX) model format
hls4ml-live-demo
Live demo of hls4ml on embedded platforms such as the Pynq-Z2
SuperSONIC
Server infrastructure for GPU inference-as-a-service in large scientific experiments
build_triton
Builds a Triton Inference Server container for CMS
onnxruntime_backend
The Triton backend for the ONNX Runtime.
physical-cartpole
Work related to the inverted pendulum hardware
pytorch_backend
The Triton backend for the PyTorch TorchScript models.
qkeras
QKeras: a quantization deep learning library for TensorFlow Keras
RecoBTag-Combined
RN0X_Pokemon
A tiny quantized neural network, originally based on ResNet8, trained to recognize Pokémon
server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
wa-hls4ml-paper
Code for plots, models, data generation and other utilities relating to the paper "wa-hls4ml: A Benchmark and Surrogate Models for hls4ml Resource and Latency Estimation"