NVIDIA-AI-IOT / torch2trt

An easy to use PyTorch to TensorRT converter

Is that available on orin nano?

adrenalin7237 opened this issue · comments

I have a Jetson Orin Nano.
I tried to install torch2trt on my Jetson device inside a Docker container.
Here is my error log from the installation:

root@ubuntu:/home/torch2trt# python3 setup.py install
Traceback (most recent call last):
File "setup.py", line 24, in <module>
plugins_ext_module = CUDAExtension(
File "/usr/local/lib/python3.8/dist-packages/torch/utils/cpp_extension.py", line 1077, in CUDAExtension
library_dirs += library_paths(cuda=True)
File "/usr/local/lib/python3.8/dist-packages/torch/utils/cpp_extension.py", line 1204, in library_paths
if (not os.path.exists(_join_cuda_home(lib_dir)) and
File "/usr/local/lib/python3.8/dist-packages/torch/utils/cpp_extension.py", line 2419, in _join_cuda_home
raise OSError('CUDA_HOME environment variable is not set. '
OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
root@ubuntu:/home/torch2trt#
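For context, the traceback comes from `torch.utils.cpp_extension`, which resolves the CUDA install root before building the extension. A simplified sketch of that lookup order (not the actual PyTorch source; the helper name `find_cuda_home` is illustrative):

```python
import os
import shutil


def find_cuda_home(env=None):
    """Sketch of how torch.utils.cpp_extension locates the CUDA root:
    CUDA_HOME (or CUDA_PATH) first, then nvcc on PATH, then /usr/local/cuda."""
    env = os.environ if env is None else env
    cuda_home = env.get("CUDA_HOME") or env.get("CUDA_PATH")
    if cuda_home:
        return cuda_home
    nvcc = shutil.which("nvcc")
    if nvcc:
        # nvcc lives in <cuda_home>/bin/nvcc, so walk two levels up
        return os.path.dirname(os.path.dirname(nvcc))
    default = "/usr/local/cuda"
    return default if os.path.exists(default) else None


if __name__ == "__main__":
    print(find_cuda_home())
```

If every step of that lookup fails in the shell running `setup.py`, you get exactly the `OSError: CUDA_HOME environment variable is not set` above, even when `nvcc -V` works in a different shell.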

  • I'm using a Docker image compatible with CUDA 11.4.
    Used image: dustynv/ros:humble-desktop-pytorch-l4t-r35.4.1
    root@ubuntu:/home/torch2trt# nvcc -V
    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2022 NVIDIA Corporation
    Built on Sun_Oct_23_22:16:07_PDT_2022
    Cuda compilation tools, release 11.4, V11.4.315
    Build cuda_11.4.r11.4/compiler.31964100_0

  • And I tried to set the environment variables in my .bashrc file.

  • These are my exported variables:
    declare -x AMENT_PREFIX_PATH="/colcon_ws/install/image_test:/colcon_ws/install/usb_cam:/opt/ros/humble/install"
    declare -x CMAKE_PREFIX_PATH="/colcon_ws/install/usb_cam:/opt/ros/humble/install"
    declare -x COLCON_PREFIX_PATH="/colcon_ws/install:/opt/ros/humble/install"
    declare -x COLORTERM="truecolor"
    declare -x CUDA_HOME="/usr/local/cuda"
    declare -x DEBIAN_FRONTEND="noninteractive"
    declare -x DISPLAY=":1"
    declare -x HOME="/root"
    declare -x HOSTNAME="ubuntu"
    declare -x LANG="en_US.UTF-8"
    declare -x LD_LIBRARY_PATH="/colcon_ws/install/usb_cam/lib:/opt/ros/humble/install/opt/rviz_ogre_vendor/lib:/opt/ros/humble/install/lib:/usr/local/cuda/lib64:"
    declare -x LD_PRELOAD="/usr/lib/aarch64-linux-gnu/libgomp.so.1"
    declare -x LS_COLORS="rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:.tar=01;31:.tgz=01;31:.arc=01;31:.arj=01;31:.taz=01;31:.lha=01;31:.lz4=01;31:.lzh=01;31:.lzma=01;31:.tlz=01;31:.txz=01;31:.tzo=01;31:.t7z=01;31:.zip=01;31:.z=01;31:.dz=01;31:.gz=01;31:.lrz=01;31:.lz=01;31:.lzo=01;31:.xz=01;31:.zst=01;31:.tzst=01;31:.bz2=01;31:.bz=01;31:.tbz=01;31:.tbz2=01;31:.tz=01;31:.deb=01;31:.rpm=01;31:.jar=01;31:.war=01;31:.ear=01;31:.sar=01;31:.rar=01;31:.alz=01;31:.ace=01;31:.zoo=01;31:.cpio=01;31:.7z=01;31:.rz=01;31:.cab=01;31:.wim=01;31:.swm=01;31:.dwm=01;31:.esd=01;31:.jpg=01;35:.jpeg=01;35:.mjpg=01;35:.mjpeg=01;35:.gif=01;35:.bmp=01;35:.pbm=01;35:.pgm=01;35:.ppm=01;35:.tga=01;35:.xbm=01;35:.xpm=01;35:.tif=01;35:.tiff=01;35:.png=01;35:.svg=01;35:.svgz=01;35:.mng=01;35:.pcx=01;35:.mov=01;35:.mpg=01;35:.mpeg=01;35:.m2v=01;35:.mkv=01;35:.webm=01;35:.ogm=01;35:.mp4=01;35:.m4v=01;35:.mp4v=01;35:.vob=01;35:.qt=01;35:.nuv=01;35:.wmv=01;35:.asf=01;35:.rm=01;35:.rmvb=01;35:.flc=01;35:.avi=01;35:.fli=01;35:.flv=01;35:.gl=01;35:.dl=01;35:.xcf=01;35:.xwd=01;35:.yuv=01;35:.cgm=01;35:.emf=01;35:.ogv=01;35:.ogx=01;35:.aac=00;36:.au=00;36:.flac=00;36:.m4a=00;36:.mid=00;36:.midi=00;36:.mka=00;36:.mp3=00;36:.mpc=00;36:.ogg=00;36:.ra=00;36:.wav=00;36:.oga=00;36:.opus=00;36:.spx=00;36:*.xspf=00;36:"
    declare -x NVIDIA_DRIVER_CAPABILITIES="all"
    declare -x NVIDIA_VISIBLE_DEVICES="all"
    declare -x OLDPWD="/home"
    declare -x OPENBLAS_CORETYPE="ARMV8"
    declare -x PATH="/opt/ros/humble/install/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    declare -x PKG_CONFIG_PATH="/opt/ros/humble/install/lib/aarch64-linux-gnu/pkgconfig:/opt/ros/humble/install/lib/pkgconfig"
    declare -x PWD="/home/torch2trt"
    declare -x PYTHONIOENCODING="utf-8"
    declare -x PYTHONPATH="/colcon_ws/install/ros2_trt_pose/lib/python3.8/site-packages:/colcon_ws/install/image_test/lib/python3.8/site-packages:/opt/ros/humble/install/lib/python3.8/site-packages"
    declare -x QT_X11_NO_MITSHM="1"
    declare -x RMW_IMPLEMENTATION="rmw_cyclonedds_cpp"
    declare -x ROS_DISTRO="humble"
    declare -x ROS_LOCALHOST_ONLY="0"
    declare -x ROS_PYTHON_VERSION="3"
    declare -x ROS_ROOT="/opt/ros/humble"
    declare -x ROS_VERSION="2"
    declare -x SHELL="/bin/bash"
    declare -x SHLVL="2"
    declare -x TERM="xterm-256color"
    declare -x TERMINATOR_DBUS_NAME="net.tenshu.Terminator23558193cd9818af7fe4d2c2f5bd9d00f"
    declare -x TERMINATOR_DBUS_PATH="/net/tenshu/Terminator2"
    declare -x TERMINATOR_UUID="urn:uuid:b27fe63b-28da-4685-91bd-c1b4199a334e"
    declare -x TORCH_HOME="/data/models/torch"
    declare -x VTE_VERSION="6003"
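One thing worth noting: variables set only in `~/.bashrc` are not always visible to the shell that actually runs the build (e.g. a non-interactive or `docker exec` session). A sketch of exporting them inline in the same session, using the paths from the container above, before invoking `setup.py`:

```shell
# Export in the same shell that runs the build (paths assumed from the log above).
export CUDA_HOME=/usr/local/cuda
export PATH="$CUDA_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64:${LD_LIBRARY_PATH:-}"
echo "$CUDA_HOME"   # sanity check, then run: python3 setup.py install
```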

Despite trying this, the installation still fails. Can you help me?
Or is it possible that this module cannot be used on the Orin Nano?

I have the same problem! Looking forward to the answer.

Hi, I have fixed this error. Check `torch.cuda.is_available()`; in my situation it returned False, so I reinstalled torch following this link: https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048. Check `torch.cuda.is_available()` again, and if it returns True, the error is fixed.
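The check above can be wrapped in a small diagnostic; a sketch (the function name `torch_cuda_status` is illustrative) that also handles torch being absent or installed as a CPU-only build:

```python
import importlib.util


def torch_cuda_status():
    """Report whether the installed torch can see CUDA (a quick diagnostic sketch)."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    if torch.cuda.is_available():
        return f"OK: torch {torch.__version__} sees CUDA {torch.version.cuda}"
    # A CPU-only wheel (e.g. from PyPI instead of the Jetson builds) lands here.
    return "torch is installed but cannot see CUDA (likely a CPU-only build)"


if __name__ == "__main__":
    print(torch_cuda_status())
```

If this reports a CPU-only build, a CUDA-enabled extension like torch2trt cannot build against it, which matches the reinstall fix described above.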