NVlabs / tiny-cuda-nn

Lightning fast C++/CUDA neural network framework


Compile PyTorch bindings with float32 precision.

srxdev0619 opened this issue · comments

Hi,

First up, thanks for this great work! I was just wondering if there's a way to compile tiny-cuda-nn and its PyTorch bindings to use float32 by default? Thanks!

Hi there, you can go into include/tiny-cuda-nn/common.h and change

#define TCNN_HALF_PRECISION (!(TCNN_MIN_GPU_ARCH == 61 || TCNN_MIN_GPU_ARCH <= 52))

to

#define TCNN_HALF_PRECISION 0
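Note that the change only takes effect after rebuilding the PyTorch extension. A sketch of the rebuild, assuming you installed the bindings from a source clone of tiny-cuda-nn (the `bindings/torch` path and install command follow the repository's setup instructions):

```shell
# From the root of the tiny-cuda-nn clone, after editing
# include/tiny-cuda-nn/common.h, reinstall the PyTorch extension
# so the new TCNN_HALF_PRECISION value is compiled in.
cd bindings/torch
python setup.py install
```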

Cheers!

Worked perfectly, thanks!