jkjung-avt / tensorrt_demos

TensorRT MODNet, YOLOv4, YOLOv3, SSD, MTCNN, and GoogLeNet

Home Page: https://jkjung-avt.github.io/

Process killed in onnx_to_tensorrt.py Demo#5

pNAIA opened this issue · comments

Demo #5 Step #5
$ python3 onnx_to_tensorrt.py -m yolov4-416
.......
[TensorRT] VERBOSE: Graph construction and optimization completed in 1.30692 seconds.
Killed

Fix this by ensuring a large enough swap file. Please follow the steps below, and consider adding them to the list of steps, @jkjung-avt. Apologies if you have already mentioned this in your exhaustive series of steps.

Check the current swap size:

free -m

Disable ZRAM:

sudo systemctl disable nvzramconfig

Create a 4 GB swap file:

sudo fallocate -l 4G /mnt/4GB.swap
sudo chmod 600 /mnt/4GB.swap
sudo mkswap /mnt/4GB.swap

Append the following line to /etc/fstab. (Note: "sudo echo ... >> /etc/fstab" fails with "Permission denied" because the output redirection is performed by your non-root shell, not by sudo; pipe through "sudo tee -a" instead.)

echo "/mnt/4GB.swap swap swap defaults 0 0" | sudo tee -a /etc/fstab

REBOOT!
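The swap check and the commands above can be combined into a small sketch script. (The 4096 MB threshold is my assumption to match the 4 GB swap file; adjust as needed. This only prints the privileged commands rather than running them.)

```shell
#!/bin/sh
# Sketch: check whether current swap is already large enough before
# creating the 4 GB swap file described in the steps above. The path
# /mnt/4GB.swap and the 4096 MB threshold mirror those steps.

# SwapTotal is reported in kB in /proc/meminfo.
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
swap_mb=$((swap_kb / 1024))

if [ "$swap_mb" -ge 4096 ]; then
    echo "Swap is already ${swap_mb} MB; no extra swap file needed."
else
    echo "Swap is only ${swap_mb} MB; create the swap file as above:"
    echo "  sudo fallocate -l 4G /mnt/4GB.swap"
    echo "  sudo chmod 600 /mnt/4GB.swap"
    echo "  sudo mkswap /mnt/4GB.swap"
fi
```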

Reference: https://courses.nvidia.com/courses/course-v1:DLI+S-RX-02+V2/info

Now go ahead and run!
$ python3 onnx_to_tensorrt.py -m yolov4-416

Thanks,
Arun
pNaia Tech

Thanks for the suggestion. I've added a link in README to this issue.

Hello! Even after doing all the steps above, I still get the "Killed" error... Do you know what else I can do?

@dashos18 What platform are you using?

Jetson Nano.
I used your code before on a Jetson Xavier and it worked amazingly well! However, with the Nano it is a bit tricky for me.

I'm able to run the code on my Jetson Nano DevKit for all YOLO models mentioned in README.md. I don't know why it doesn't work for you.

As a last resort, you might try to conserve system memory by going into "text mode" (i.e. freeing up system memory consumed by the graphical interface) before executing "onnx_to_tensorrt.py": #386 (comment).
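For reference, switching an Ubuntu/systemd system (such as a Jetson) to text mode is typically done with systemd boot targets; these are standard systemctl invocations, but check the linked comment for the exact steps the author recommends:

```shell
# Free the memory used by the graphical desktop before building the engine.

# One-off: drop to text mode for this session only:
#   sudo systemctl isolate multi-user.target

# Persistent: boot to text mode until changed back:
#   sudo systemctl set-default multi-user.target
#   sudo systemctl set-default graphical.target   # restore the desktop later

# Check which target is currently the default (skipped if systemd is absent):
command -v systemctl >/dev/null 2>&1 && systemctl get-default || true
```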

Thanks a lot!

thanks!