pytorch / glow

Compiler for Neural Network hardware accelerators

Compile model into OpenCL backend

Theologis opened this issue · comments

Hi,
I am curious to know if it is possible to compile a model and get OpenCL code instead of the .o and .h files.

Thank you in advance.

The existing OpenCL backend does not generate self-contained OpenCL code in source or binary form the way the CPUBackend does when producing .o files. It acts more like an interpreter and does not do any AOT compilation: it simply walks over the IRFunction's instructions and directly enqueues the required kernels for execution by OpenCL.
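For illustration only, here is a minimal sketch of what such interpreter-style dispatch looks like with the raw OpenCL host API. The `Instr` struct and `runFunction` are hypothetical stand-ins, not Glow's actual classes; the point is that kernels are bound and enqueued at run time and nothing is ever written out as source or object code:

```cpp
// Sketch: for each lowered instruction, pick a pre-built kernel,
// bind its operands, and enqueue it. No code is emitted to disk.
#include <CL/cl.h>
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Hypothetical stand-in for one lowered IR instruction.
struct Instr {
  std::string kernelName;   // e.g. "elementwise_add"
  std::vector<cl_mem> args; // device buffers for operands/results
  size_t globalSize;        // flat launch size
};

// Enqueue every instruction's kernel on the command queue, in order.
// `kernels` maps kernel names to kernels already built with
// clCreateProgramWithSource / clBuildProgram at backend setup time.
void runFunction(cl_command_queue queue,
                 const std::vector<std::pair<std::string, cl_kernel>> &kernels,
                 const std::vector<Instr> &instrs) {
  for (const Instr &I : instrs) {
    cl_kernel k = nullptr;
    for (const auto &kv : kernels)
      if (kv.first == I.kernelName) { k = kv.second; break; }
    if (!k)
      continue; // unsupported instruction; a real backend would report this
    // Bind the operand buffers as kernel arguments.
    for (cl_uint i = 0; i < I.args.size(); ++i)
      clSetKernelArg(k, i, sizeof(cl_mem), &I.args[i]);
    // Launch immediately; nothing is compiled ahead of time or saved.
    clEnqueueNDRangeKernel(queue, k, /*work_dim=*/1, nullptr,
                           &I.globalSize, nullptr, 0, nullptr, nullptr);
  }
  clFinish(queue); // wait for all enqueued kernels to complete
}
```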

In principle, it should be quite possible to take the existing OpenCL backend as a basis and create another backend with very similar logic, but instead of directly invoking the OpenCL kernels it would do AOT compilation and generate C/C++ source code that invokes those kernels. In some sense, that would make the backend similar to the CPUBackend, with the difference that it produces source code instead of LLVM IR or object files. BTW, instead of source code one could also produce LLVM IR if needed.
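As a rough sketch of what such a code-emitting backend might do: instead of calling clEnqueueNDRangeKernel at run time, it could walk the same instruction list and print a standalone C host function that performs those calls. The types and names below are hypothetical and not part of Glow; the emitted file plus the .cl kernel sources would together form the AOT "bundle" for this imagined backend:

```cpp
// Sketch: emit C source that replays the kernel launches rather than
// executing them directly.
#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

struct EmittedLaunch {
  std::string kernelName;                // OpenCL kernel to invoke
  std::vector<std::string> bufferNames;  // symbolic buffer names, for comments
  size_t globalSize;                     // flat launch size
};

void emitHostSource(const std::string &path,
                    const std::vector<EmittedLaunch> &launches) {
  std::ofstream os(path);
  os << "// Auto-generated OpenCL host code\n"
     << "#include <CL/cl.h>\n"
     << "void inferFunction(cl_command_queue q, cl_kernel *kernels,\n"
     << "                   cl_mem *buffers) {\n";
  for (size_t i = 0; i < launches.size(); ++i) {
    const EmittedLaunch &L = launches[i];
    os << "  // " << L.kernelName << "\n";
    for (size_t a = 0; a < L.bufferNames.size(); ++a)
      os << "  clSetKernelArg(kernels[" << i << "], " << a
         << ", sizeof(cl_mem), &buffers[" << a << "] /* "
         << L.bufferNames[a] << " */);\n";
    os << "  { size_t gs = " << L.globalSize << ";\n"
       << "    clEnqueueNDRangeKernel(q, kernels[" << i
       << "], 1, 0, &gs, 0, 0, 0, 0); }\n";
  }
  os << "  clFinish(q);\n}\n";
}
```

The generated file could then be compiled with an ordinary C compiler and linked against an OpenCL runtime, which is the analogue of how the CPUBackend's bundle is linked into a user application.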

@opti-mix Thank you very much for your response! Actually, I was trying to see how Glow performs the partitions and fusions that improve performance, and OpenCL code would help me understand such optimizations. It would also make it easier to target CPUs/GPUs that are not supported by LLVM, and to synthesize the OpenCL kernels using HLS tools.