Claydon-Wang / Addernet-CUDA


Training AdderNet, accelerated by CUDA

The original AdderNet repo implements adder-based convolution in plain PyTorch; however, that implementation remains slow and requires considerably more runtime memory than a CUDA-accelerated variant.
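For reference, here is a minimal sketch of the adder-based convolution that AdderNet uses, where the multiply-accumulate of a standard convolution is replaced by a negative L1 distance between each input patch and each filter. The function name "adder_conv2d" and the unfold-based layout are illustrative assumptions rather than the repo's API; the large intermediate difference tensor also illustrates why a pure-PyTorch version is memory-hungry.

```python
import torch
import torch.nn.functional as F

def adder_conv2d(x, weight, stride=1, padding=0):
    # x: (N, C_in, H, W), weight: (C_out, C_in, kH, kW)
    n, c_in, h, w = x.shape
    c_out, _, kh, kw = weight.shape
    # Extract sliding patches: (N, C_in*kH*kW, L) where L is the number of output positions.
    cols = F.unfold(x, (kh, kw), stride=stride, padding=padding)
    w_flat = weight.view(c_out, -1)  # (C_out, C_in*kH*kW)
    # Negative L1 distance between every patch and every filter.
    # The broadcasted difference tensor has shape (N, C_out, C_in*kH*kW, L),
    # which is the main source of the extra runtime memory in pure PyTorch.
    out = -(cols.unsqueeze(1) - w_flat.unsqueeze(0).unsqueeze(-1)).abs().sum(dim=2)
    h_out = (h + 2 * padding - kh) // stride + 1
    w_out = (w + 2 * padding - kw) // stride + 1
    return out.view(n, c_out, h_out, w_out)
```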

This repository partially references ShiftAddNet.

You can compile the "adder" folder to obtain a CUDA version of "adder2D", which can replace "Conv2D" while using the hardware more efficiently (see the build sketch below).
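A typical way to build such a PyTorch CUDA extension is with setuptools and torch.utils.cpp_extension; the script below is a sketch under that assumption, and the package name and source file names are placeholders for whatever the "adder" folder actually ships.

```python
# setup.py -- hypothetical build script for the "adder" CUDA extension
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name='adder_cuda',
    ext_modules=[
        CUDAExtension(
            name='adder_cuda',
            # Source file names are assumptions; use the files shipped in the "adder" folder.
            sources=['adder_cuda.cpp', 'adder_cuda_kernel.cu'],
        ),
    ],
    cmdclass={'build_ext': BuildExtension},
)
```

Building would then amount to something like running "python setup.py install" from inside the "adder" folder.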

"adder2D-CUDA" can compress over ~10x training time than non-CUDA version of "adder2D".

Languages

Python 85.2%, Cuda 11.5%, C++ 2.4%, CMake 0.9%