High Granularity Quantization

High Granularity Quantization for Ultra-Fast Machine Learning Applications on FPGAs

HGQ is a framework for quantization-aware training of neural networks to be deployed on FPGAs. It allows bitwidths to be optimized at per-weight and per-activation granularity.

Depending on the specific application, HGQ can achieve up to 10x resource reduction compared to the traditional AutoQKeras approach while maintaining the same accuracy. For more challenging tasks, where the model is already under-fitted, HGQ can still improve performance at the same on-board resource consumption. For more details, please refer to our paper (link coming not too soon).

This repository implements HGQ for tensorflow.keras models. It is independent of the QKeras project.
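As a sketch of the intended workflow (a minimal example based on the documented API; the layer names `HQuantize`/`HDense`, the `beta` regularization factor, and the `ResetMinMax`/`FreeBOPs` callbacks are taken from the project documentation and may change, per the warning below):

```python
import numpy as np
from tensorflow.keras.models import Sequential
from HGQ.layers import HDense, HQuantize
from HGQ import ResetMinMax, FreeBOPs

# Toy data, only to make the sketch self-contained.
x_train = np.random.rand(1024, 16).astype('float32')
y_train = np.random.randint(0, 10, 1024)

model = Sequential([
    # HQuantize quantizes the model input; beta trades
    # on-board resource usage against accuracy.
    HQuantize(beta=3e-5, input_shape=(16,)),
    HDense(32, activation='relu', beta=3e-5),
    HDense(10, beta=3e-5),
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy')
model.fit(
    x_train, y_train, epochs=5, batch_size=128,
    # ResetMinMax re-traces activation ranges each epoch;
    # FreeBOPs reports the model's bit-operation count.
    callbacks=[ResetMinMax(), FreeBOPs()],
)
```

Larger `beta` values penalize the estimated resource usage (BOPs) more heavily, pushing individual bitwidths down.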

Warning:

This framework requires an unmerged PR of hls4ml. Please install it by running `pip install "git+https://github.com/calad0i/hls4ml@HGQ-integration"`; otherwise, conversion will fail with an unsupported-layer error.

This package is still under development. Any API might change without notice at any time!
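For reference, conversion goes through an intermediate proxy model before hls4ml. Below is a minimal sketch continuing the example above; `trace_minmax` and `to_proxy_model` are taken from the project documentation and may differ across versions, and the FPGA part number is a placeholder:

```python
from HGQ import trace_minmax, to_proxy_model
from hls4ml.converters import convert_from_keras_model

# Calibrate activation ranges on representative data so every
# intermediate value can be assigned a fixed-point bitwidth.
trace_minmax(model, x_train)

# The proxy is a plain Keras model with all quantization baked
# in; this is what the hls4ml branch above knows how to convert.
proxy = to_proxy_model(model)

hls_model = convert_from_keras_model(
    proxy,
    backend='vivado',
    output_dir='hgq_prj',
    part='xcvu13p-flga2577-2-e',  # placeholder part number
)
hls_model.compile()  # build the C simulation model
```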

License

Apache License 2.0