The documentation here is intended to help customers build the Open Source DeepStream Dockerfiles. It applies both to x86 systems with a dGPU setup and to NVIDIA Jetson devices.
As of DS 6.2, DeepStream docker containers do not package the libraries necessary for certain multimedia operations such as audio data parsing, CPU decode, and CPU encode. This change can affect the processing of video streams/files (e.g., mp4) that include audio tracks.
Please run the script below inside the docker containers to install the additional packages that might be necessary to use all of the DeepStreamSDK features:
/opt/nvidia/deepstream/deepstream/user_additional_install.sh
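As a hedged sketch, the script can also be invoked non-interactively when starting a container; the image tag below is the sample name used later in this guide, and the docker flags are assumptions about your setup:

```shell
# Sketch: run the extra-packages script inside a DeepStream container.
# IMG is the sample tag used later in this guide, not a requirement.
IMG=deepstream:6.2.0-devel-local
SCRIPT=/opt/nvidia/deepstream/deepstream/user_additional_install.sh
# Only attempt the run when docker is available on this host.
if command -v docker >/dev/null 2>&1; then
  docker run --rm --gpus all "$IMG" "$SCRIPT"
fi
echo "$SCRIPT"
```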
Please refer to the Prerequisites section on the DeepStream NGC page to build and run DeepStream containers.
- Please download the DeepStreamSDK release x86 tarball and place it in the $ROOT/x86_64 folder of this repository.
cp deepstream_sdk_v6.2.0_x86_64.tbz2 x86_64/
- image_url is the desired docker name:TAG.
- ds_pkg and ds_pkg_dir shall be the tarball file name with and without the tarball extension, respectively. Refer to Section 2.1.3 x86 Build Command for a sample command.
- base_image is the desired base container name. Please feel free to use the sample name provided in the sample commands. This name is used in the Triton build steps alone. Refer to Section 2.1.3 x86 Build Command for a sample command.
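Putting the variables together, a minimal sketch of what a build invocation exports; the values are the sample names from this guide, not requirements:

```shell
# The variables the Makefiles expect; values are the sample names
# used in this guide, not requirements.
image_url=deepstream:6.2.0-devel-local     # desired docker name:TAG
ds_pkg=deepstream_sdk_v6.2.0_x86_64.tbz2   # DeepStream SDK tarball file name
ds_pkg_dir=deepstream_sdk_v6.2.0.0_x86_64  # extracted directory name, as in the sample commands
base_image=dgpu-any-custom-base-image      # base image; used in the Triton builds alone
echo "$image_url"
```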
Note: These packages (uff-converter-tf and graphsurgeon-tf) are now included by default.
2.1.2.2 The TensorRT 8.5.2.2 install is required to build the x86 devel, base, samples, and iot Dockers
Download nv-tensorrt-local-repo-ubuntu2004-8.5.2-cuda-11.8_1.0-1_amd64.deb from the TensorRT download page.
Note: You may have to login to developer.nvidia.com to download the file.
Quick Steps:
$ROOT is the root directory of this git repo.
cd $ROOT
cp nv-tensorrt-local-repo-ubuntu2004-8.5.2-cuda-11.8_1.0-1_amd64.deb x86_64/
The Docker ADD method is used by default for ease of building the x86 Dockers; however, Docker ADD increases the image size by approximately 2 GB.
To work around this, you can host nv-tensorrt-local-repo-ubuntu2004-8.5.2-cuda-11.8_1.0-1_amd64.deb on a server and pull it in during the docker build using wget. The wget section of the code is commented out by default.
To use the wget method, uncomment the section of the dockerfile that starts with the line below, and add the complete URL to the file.
\# install TensorRT repo from a hosted file on a server
Then comment out the section of the dockerfile that starts with:
\# Add TensorRT repo
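As a sketch, the uncommented wget section might look like the following; the URL is a placeholder that you must replace with wherever you host the .deb:

```dockerfile
# install TensorRT repo from a hosted file on a server
# NOTE: the URL below is a placeholder -- point it at your own server.
RUN wget https://your-server.example.com/nv-tensorrt-local-repo-ubuntu2004-8.5.2-cuda-11.8_1.0-1_amd64.deb \
    -O /tmp/nv-tensorrt-local-repo-ubuntu2004-8.5.2-cuda-11.8_1.0-1_amd64.deb
```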
The following are EXCLUDED from the x86 devel docker file: graph composer and Vulkan drivers (needed by graph composer).
sudo image_url=deepstream:6.2.0-devel-local \
ds_pkg=deepstream_sdk_v6.2.0_x86_64.tbz2 \
ds_pkg_dir=deepstream_sdk_v6.2.0.0_x86_64/ \
base_image=dgpu-any-custom-base-image make -f Makefile devel -C x86_64/
sudo image_url=deepstream:6.2.0-triton-local \
ds_pkg=deepstream_sdk_v6.2.0_x86_64.tbz2 \
ds_pkg_dir=deepstream_sdk_v6.2.0.0_x86_64/ \
base_image=dgpu-any-custom-base-image make -f Makefile_x86_triton triton-devel -C x86_64/
An example build script with the same contents is provided at $ROOT/buildx86.sh.
sudo image_url=deepstream:6.2.0-devel-local \
ds_pkg=deepstream_sdk_v6.2.0_x86_64.tbz2 \
ds_pkg_dir=deepstream_sdk_v6.2.0.0_x86_64/ \
base_image=dgpu-any-custom-base-image make -f Makefile devel -C x86_64/
sudo image_url=deepstream:6.2.0-base-local \
ds_pkg=deepstream_sdk_v6.2.0_x86_64.tbz2 \
ds_pkg_dir=deepstream_sdk_v6.2.0.0_x86_64/ \
base_image=dgpu-any-custom-base-image make -f Makefile base -C x86_64/
sudo image_url=deepstream:6.2.0-samples-local \
ds_pkg=deepstream_sdk_v6.2.0_x86_64.tbz2 \
ds_pkg_dir=deepstream_sdk_v6.2.0.0_x86_64/ \
base_image=dgpu-any-custom-base-image make -f Makefile runtime -C x86_64/
sudo image_url=deepstream:6.2.0-iot-local \
ds_pkg=deepstream_sdk_v6.2.0_x86_64.tbz2 \
ds_pkg_dir=deepstream_sdk_v6.2.0.0_x86_64/ \
base_image=dgpu-any-custom-base-image make -f Makefile test5 -C x86_64/
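Once a build completes, a quick sanity check of the resulting image might look like the sketch below; the tag is the sample name from the commands above, and `deepstream-app --version` is assumed to be available on the image's PATH:

```shell
# Sketch: confirm a freshly built x86 image starts and DeepStream is present.
IMG=deepstream:6.2.0-samples-local  # sample tag from the build commands above
# Only attempt the run when docker is available on this host.
if command -v docker >/dev/null 2>&1; then
  docker run --rm --gpus all "$IMG" deepstream-app --version
fi
echo "$IMG"
```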
The Jetson Dockers must be built on a Jetson device (e.g., Orin).
Please refer to the Prerequisites section on the DeepStream NGC page to build and run DeepStream containers.
Download the DeepStreamSDK release Jetson tarball and place it in the $ROOT/jetson folder of this repository.
cp deepstream_sdk_v6.2.0_jetson.tbz2 jetson/
sudo image_url=deepstream-l4t:6.2.0-triton-local \
ds_pkg=deepstream_sdk_v6.2.0_jetson.tbz2 \
ds_pkg_dir=deepstream_sdk_v6.2.0_jetson \
base_image=jetson-any-custom-base-image make -f Makefile triton -C jetson/
An example build script with the same contents is provided at $ROOT/buildjet.sh.
sudo image_url=deepstream-l4t:6.2.0-triton-local \
ds_pkg=deepstream_sdk_v6.2.0_jetson.tbz2 \
ds_pkg_dir=deepstream_sdk_v6.2.0_jetson \
base_image=jetson-any-custom-base-image make -f Makefile triton -C jetson/
sudo image_url=deepstream-l4t:6.2.0-base-local \
ds_pkg=deepstream_sdk_v6.2.0_jetson.tbz2 \
ds_pkg_dir=deepstream_sdk_v6.2.0_jetson \
base_image=jetson-any-custom-base-image make -f Makefile base -C jetson/
sudo image_url=deepstream-l4t:6.2.0-samples-local \
ds_pkg=deepstream_sdk_v6.2.0_jetson.tbz2 \
ds_pkg_dir=deepstream_sdk_v6.2.0_jetson \
base_image=jetson-any-custom-base-image make -f Makefile runtime -C jetson/
sudo image_url=deepstream-l4t:6.2.0-iot-local \
ds_pkg=deepstream_sdk_v6.2.0_jetson.tbz2 \
ds_pkg_dir=deepstream_sdk_v6.2.0_jetson \
base_image=jetson-any-custom-base-image make -f Makefile test5 -C jetson/
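A similar sanity check for a Jetson image would use the NVIDIA container runtime instead of `--gpus`; the tag is the sample name from the commands above, and the runtime flag is an assumption about your JetPack setup:

```shell
# Sketch: start a freshly built Jetson image (sample tag from above).
IMG=deepstream-l4t:6.2.0-samples-local
# Only attempt the run when docker is available on this host.
if command -v docker >/dev/null 2>&1; then
  docker run --rm --runtime nvidia "$IMG" deepstream-app --version
fi
echo "$IMG"
```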