BachiLi / redner

Differentiable rendering without approximation.

Home Page: https://people.csail.mit.edu/tzumao/diffrt/


RTX 30 Series Compatibility

Yozey opened this issue · comments

commented

Hi,

I've been using PyRedner for a while and I found it very helpful. Thanks!

Recently I upgraded my GPU from a 2080 Ti to a 3090. With the same code and an identical Conda environment, I get an error on the 3090 but not on the 2080 Ti:

RuntimeError: Function "RTPresult _rtpModelUpdate(RTPmodel, unsigned int)" caught exception: Encountered a CUDA error: radix_sort_temp_size -> cub::DeviceRadixSort::SortPairs returned (8): invalid device function

After some research online, I suspect this is due to the old OptiX version (5.1.1). Does that mean the 30 series is not supported? Is there any workaround for this problem?

(I noticed that an OptiX version older than 6.5 is required to compile PyRedner. Can we use the latest OptiX version?)
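For reference, a quick way to confirm the mismatch is to print the CUDA toolkit PyTorch was built with and the GPU's compute capability; Ampere cards (RTX 30 series, A100) report 8.x, and a binary compiled only for older architectures typically fails with exactly this kind of "invalid device function" error. A minimal sketch, assuming only PyTorch is installed:

import torch

# CUDA toolkit version this PyTorch build was compiled against
print("torch:", torch.__version__)
print("CUDA (torch build):", torch.version.cuda)

# GPU name and compute capability; an RTX 3090 reports (8, 6), a 2080 Ti (7, 5)
print("device:", torch.cuda.get_device_name(0))
print("compute capability:", torch.cuda.get_device_capability(0))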

I'd like to follow up on this as I'm running into it too and think a good number of users will start to as well.

commented

I had the same problem when trying to run on a Colab instance with an A100.

I'm having the same issue! Has anyone been able to find a solution?

I'm having the same issue. I'm working with an RTX 3090 and PyTorch built with CUDA 11. Is it possible that redner is built with CUDA 10? PyTorch will not let me downgrade to CUDA 10 and complains that the GPU is not compatible.
Thanks.

I'm having the same issue. I'm working with an RTX 3080 Laptop GPU and PyTorch built with CUDA 11.3.

commented

Same issue with an RTX 3090 Ti and pytorch==1.12.0.

redner currently uses OptiX Prime, and version 5.1 is deprecated for RTX GPUs.
The last OptiX release that shipped OptiX Prime with RTX GPU support seems to be 6.5.

OptiX Prime does not take advantage of the RTX-capable OptiX implementations shipped as part of the RTX drivers, so renders will be roughly 10x slower than psdr-cuda or mitsuba-2.

If anyone is still interested, I would suggest updating the build files to migrate to the latest OptiX version whose OptiX Prime is supported on RTX GPUs (6.5).

I have a fork here that I was able to build on Windows with an RTX 3080 Ti:
https://github.com/leventt/redner

You can see what I changed here:
master...leventt:redner:master

You would have to adapt this for Linux and place the binary dependencies for OptiX 6.5 under a redner-dependencies/optix folder at the repository root, but I have only tried it on Windows.
You can download Optix 6.5 here:
https://developer.nvidia.com/designworks/optix/downloads/legacy

You can build wheels by running this from the repository root:
pip wheel -w dist --verbose .

(I will make a PR to @BachiLi soon)
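After installing the resulting wheel from dist/, a minimal smoke test along these lines should confirm the GPU path works end to end (a sketch using the object-oriented pyredner API from the tutorials; teapot.obj is a placeholder for any local mesh):

import torch
import pyredner

# Render on the GPU if one is available
pyredner.set_use_gpu(torch.cuda.is_available())

# teapot.obj is a stand-in for any mesh you have locally
objects = pyredner.load_obj('teapot.obj', return_objects=True)
camera = pyredner.automatic_camera_placement(objects, resolution=(256, 256))
scene = pyredner.Scene(camera=camera, objects=objects)

# A cheap render is enough to exercise the OptiX Prime path
img = pyredner.render_albedo(scene)
print(img.shape)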

commented

@leventt I have tested using OptiX 6.5 instead of the master version (https://github.com/BachiLi/redner)
on Ubuntu 20.04 LTS with pytorch==1.11.0+cu113 and an RTX 3090 Ti,
and a new error occurred:

scene = redner.Scene(camera,
RuntimeError: Function "RTPresult _rtpModelUpdate(RTPmodel, unsigned int)" caught exception: Encountered a CUDA error: cudaEventRecord( m_eventEnd, stream ) returned (700): an illegal memory access was encountered

commented

@leventt using your version (https://github.com/leventt/redner) in ubuntu==20.04 LTS pytorch==1.11.0+cu113 RTX3090Ti

File "/home/ubuntu/anaconda3/lib/python3.8/site-packages/pyredner/render_pytorch.py", line 609, in unpack_args scene = redner.Scene(camera, RuntimeError: Function "RTPresult _rtpModelUpdate(RTPmodel, unsigned int)" caught exception: Encountered a CUDA error: cudaEventRecord( m_eventEnd, stream ) returned (700): an illegal memory access was encountered

@ForrestPi Are you perhaps running out of memory? Can you share a snippet that recreates this for you?

I am asking, but I am most likely not going to try fixing this for you; perhaps someone else can advise. I am just pointing out that I can run redner on a 3080 Ti with #187.

@ForrestPi @leventt Hello there! I am facing the same issue when I install redner on Linux with a 3090 based on @leventt's version (https://github.com/leventt/redner); it does install successfully, but when I try to use redner I get the error:
RuntimeError: Function "RTPresult _rtpModelUpdate(RTPmodel, unsigned int)" caught exception: Encountered a CUDA error: cudaEventRecord( m_eventEnd, stream ) returned (700): an illegal memory access was encountered
It happens even with a small batch_size, and even when rendering a single image.
Have you figured out how to fix this?
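For reference, roughly this kind of single-image render is already enough to trigger it for me (a sketch; the tiny resolution and sample count are there to rule out memory pressure, and teapot.obj stands in for my actual mesh):

import torch
import pyredner

pyredner.set_use_gpu(torch.cuda.is_available())

# teapot.obj is a placeholder for the actual mesh being rendered
objects = pyredner.load_obj('teapot.obj', return_objects=True)
camera = pyredner.automatic_camera_placement(objects, resolution=(64, 64))
scene = pyredner.Scene(camera=camera, objects=objects)

# Tiny resolution and sample count, so memory pressure should not be a factor;
# the illegal memory access is raised as soon as the scene reaches the renderer
img = pyredner.render_pathtracing(scene, num_samples=4)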

I was able to compile for Python 3.9 and CUDA 11.6.
I haven't tested it yet.

Here it is:

redner-0.4.28-cp39-cp39-win_amd64.zip

But I was having the same _rtpModelUpdate problem others were having.

I just found out why it doesn't work: OptiX Prime won't work on RTX 30 series GPUs.
https://forums.developer.nvidia.com/t/optix-6-5-prime-samples-fail-with-rtx-3080/177078

Here is another version for Python 3.9, without CUDA:
redner-0.4.28-cp39-cp39-win_amd64.zip