thomaswang525 / NVIDIA_Jetson_Inference

This repo contains model compression (using TensorRT) and documentation for running various deep learning models on NVIDIA Jetson Orin and Jetson Nano (aarch64 architecture).
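For context, a minimal sketch of the kind of TensorRT workflow this implies: building an FP16 engine from an ONNX model with the TensorRT Python API. The model path and builder settings here are illustrative assumptions, not taken from this repo.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path: str, fp16: bool = True) -> bytes:
    """Parse an ONNX model and build a serialized TensorRT engine."""
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("Failed to parse ONNX model")

    config = builder.create_builder_config()
    # 1 GiB workspace; adjust for the Jetson module's available memory.
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)
    if fp16 and builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)

    return builder.build_serialized_network(network, config)

if __name__ == "__main__":
    # "model.onnx" is a placeholder path.
    engine_bytes = build_engine("model.onnx")
    with open("model.engine", "wb") as f:
        f.write(engine_bytes)
```

The serialized engine can then be deserialized at runtime with a `trt.Runtime` for inference on the Jetson device.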



Languages

Makefile 81.3%
C 9.3%
C++ 2.9%
Roff 2.1%
Shell 1.1%
CMake 0.8%
M4 0.8%
Perl 0.7%
Python 0.5%
DIGITAL Command Language 0.3%
D 0.1%
Batchfile 0.1%
VBScript 0.0%
HTML 0.0%
JavaScript 0.0%
CSS 0.0%
DTrace 0.0%