Chauffeur: Benchmark Suite for Design and End-to-End Analysis of Self-Driving Vehicles on Embedded Systems
If you use this work, please cite our paper published in the ESWEEK-TECS special issue and presented at the International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS), 2021.
B. Maity, S. Yi, D. Seo, L. Cheng, S. S. Lim, J. C. Kim, B. Donyanavard, and N. Dutt, "Chauffeur: Benchmark suite for design and end-to-end analysis of self-driving vehicles on embedded systems," ACM Transactions on Embedded Computing Systems (TECS), Oct. 2021.
git clone https://github.com/duttresearchgroup/Chauffeur
cd Chauffeur
git submodule update --init --recursive
- Follow the instructions here to make sure Docker is installed.
- We use Docker to compile the source code of the micro-benchmarks. Please navigate to the `docker/arm` folder and perform the following steps:
- For Linux (Debian):
# Install the QEMU packages
sudo apt-get install qemu binfmt-support qemu-user-static
- For macOS:
brew install qemu
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes # This step executes the registration scripts
- Please refer to this link for more details.
sudo apt install docker-compose
cd docker
docker-compose build
- This will prepare the environment to build the applications.
- You can check the version of L4T running on the NVIDIA board with `jtop`.
- This is the most time-consuming step of the process. Remember to grab your coffee at this point. We are downloading all the necessary tools so that you don't have to compile on the board.
- `docker-compose up`: This will use the compilers inside the build environment to produce the binaries. By default it calls the `build.sh` script, which builds the applications one by one and places the final binaries in the `applications/bin` folder. Once the applications are generated, you are ready to run them!
- [Debug] To use an interactive debugging environment, please run:
docker-compose run Chauffeur.builder bash
source scripts/envs.sh
bash scripts/APP_NAME/build.sh [tx2/px2]
(e.g., bash scripts/darknet_ros/build.sh tx2)
See the wiki for more information.
- This step is only required if you are cross-compiling Chauffeur.
- Please ensure that `rsync` is installed on both the host and the target, and additionally that `sshpass` is installed on the host.
- In `scripts/envs.sh`, modify the remote credentials for the machine where you want to deploy.
- Create a file called `scripts/passwd` to store the SSH password.
- Next, using a `bash` shell, execute the following:
scripts/send.sh cross-apps/ applications/
scripts/send.sh scripts/ scripts/
scripts/send.sh data/ data/
source scripts/envs.sh
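The deploy helper above presumably wraps `rsync` with `sshpass`, using the credentials from `scripts/envs.sh` and the password file. As a minimal sketch (the `REMOTE_USER`/`REMOTE_HOST` variable names are illustrative assumptions, not taken from the repo), a `send.sh`-style transfer command could be assembled like this:

```shell
# Sketch only: build the rsync command a send.sh-style helper might run.
# REMOTE_USER/REMOTE_HOST are assumed to come from scripts/envs.sh;
# the real script's variable names may differ.
send_cmd() {
  # $1 = local source path, $2 = destination path on the target
  echo "sshpass -f scripts/passwd rsync -avz $1 ${REMOTE_USER}@${REMOTE_HOST}:$2"
}

REMOTE_USER=nvidia
REMOTE_HOST=tx2.local
send_cmd cross-apps/ applications/
```

Echoing the command rather than executing it makes the transfer easy to inspect before pointing it at a real target.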
- In the `scripts` folder we have included the relevant launch script `run.sh` for each application. For example, to run the kalman_filter application: `sh scripts/kalman_filter/run.sh`
- For a cross-compiled environment, pass `cross` as an argument to the script. Example: `sh scripts/kalman_filter/run.sh cross`.
- Be careful about running multiple instances of the same app. Only one instance of each app should be running at a time. Use `sudo kill` to end a previous instance of an app before running a new one.
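To avoid launching a second copy by accident, a small guard can check for an existing instance before running. This is a sketch; the `pgrep` pattern is an assumption about how `run.sh` appears in the process list:

```shell
# Sketch: refuse to launch if a run.sh for this app is already running.
# The matched pattern is an assumption about how the process shows up in ps.
ensure_single_instance() {
  if pgrep -f "scripts/$1/run.sh" > /dev/null; then
    echo "an instance of $1 is already running; stop it first (e.g. sudo pkill -f $1)"
    return 1
  fi
}

ensure_single_instance kalman_filter && echo "ok to launch kalman_filter"
```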
For cross-compiling, we also install the following libraries on the target using the provided `packages` folder:
cd Chauffeur
pip3 install -r packages/requirements.txt
sudo apt-get update
xargs sudo apt-get install < packages/apt_requirements.txt
More information for required packages can be found here.
For running instances of the end-to-end pipeline consisting of Chauffeur applications, we provide a Python-based script.
cd Chauffeur/scripts/end-to-end
pip3 install -r requirements.txt
python3 runner.py
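The runner is interactive: workloads are selected by number (0-8) and launched with 'a', as described in the x86 instructions below. The index-to-workload mapping could look like the sketch here; the ordering is an assumption based on the application table, not taken from `runner.py`:

```shell
# Sketch of an assumed index-to-workload mapping (0-8); the actual
# order used by runner.py may differ.
WORKLOADS="cuda-lane-detection darknet-ros floam hybrid-astar \
jetson-inference kalman-filter openMVG lidar-tracker orb-slam-3"

pick_workload() {
  # $1 = index 0-8; prints the workload name at that index
  echo $WORKLOADS | cut -d' ' -f$(( $1 + 1 ))
}

pick_workload 5
```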
- NVIDIA Jetson TX2: Tested with JetPack 4.2.1, L4T 32.2.0
- NVIDIA Drive PX2: No cross-compiler support; tested with NVIDIA DRIVE OS 4.9.80-rt61-tegra
- Follow instructions here to install NVIDIA Container Toolkit.
Make sure the base image in the Dockerfile is compatible with the NVIDIA driver and CUDA version on your host machine.
- Use `nvidia-smi` to check the CUDA version.
- Based on the CUDA version, go here to choose the base image for your Docker container.
- Navigate to `docker/x86` and use a text editor to modify the first line of the `Dockerfile` to the base image you just chose.
cd Chauffeur
cp docker/x86/Dockerfile ./Dockerfile
docker build . -t x86.runner
- Use the following command to run the container and get into the container's bash shell:
docker run -it --gpus all -v $(pwd)/logs:/workspace/logs x86.runner /bin/bash
- We use cuda-lane-detection as a test example:
- Navigate to `/workspace/scripts/lane_detection/cuda-lane-detection` in the Docker container.
- Run `./run.sh`.
- The terminal should run cuda-lane-detection once without any CUDA-related error messages.
- Run:
docker run -it --gpus all -v $(pwd)/logs:/workspace/logs x86.runner
- Now the runner is ready to accept input from the user. Select workloads by typing Ctrl+C, followed either by a number (0-8) to select a workload, or the character 'a' to launch.
- You might run into compatibility issues with NVIDIA GPUs. Check that the CUDA arch and CUDA gencode flags match your NVIDIA GPU architecture. See here for more information.
- If the architecture does not match, go to the application's source code, change the compilation flags, and compile the application again.
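As an illustration of matching the arch/gencode flags to the hardware: the Jetson TX2's integrated GPU is compute capability 6.2, so its `nvcc` flags would name `compute_62`/`sm_62`. The helper below is a sketch for constructing such a flag, not part of the repo's build scripts:

```shell
# Sketch: construct the nvcc -gencode flag for a given compute capability.
# E.g. the Jetson TX2's integrated GPU is compute capability 6.2 (sm_62).
gencode_flag() {
  echo "-gencode arch=compute_$1,code=sm_$1"
}

gencode_flag 62
```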
| Application | Parallelism | Framework |
|---|---|---|
| cuda-lane-detection | Data-level | OpenCV (TBB, pthreads) |
| darknet-ros | Thread-level | C++ |
| floam | Thread-level | C++ |
| hybrid-astar | None | None |
| jetson-inference | None | None |
| kalman-filter | None | None |
| openMVG | Data-level | OpenMP |
| lidar-tracker | Data-level | OpenCV (TBB, pthreads) |
| orb-slam-3 | Thread-level | C++ |
- jetson-inference @ e4ebc40967604945fd501b8d35ed0b9e86ca8b2d
- floam @ de361346020575bd89d32eac969614bc2c72d17c
- cuda-sfm @ 2e3dcdfeb959426ba897358471f4bee7d9c99b79