Compile PyTorch Bindings with float32 precision.
srxdev0619 opened this issue · comments
ShahRukh Athar commented
Hi,
First up, thanks for this great work! I was just wondering if there's a way to compile tiny-cuda-nn
and its PyTorch bindings to use float32
by default? Thanks!
Thomas Müller commented
Hi there, you can go into include/tiny-cuda-nn/common.h
and change
#define TCNN_HALF_PRECISION (!(TCNN_MIN_GPU_ARCH == 61 || TCNN_MIN_GPU_ARCH <= 52))
to
#define TCNN_HALF_PRECISION 0
Cheers!
ShahRukh Athar commented
Worked perfectly, thanks!