The source code of CRLFnet.

Env: Ubuntu 20.04 + ROS (Noetic) + Python 3.x
- If using Google Colab, the recommended environment is CUDA 10.2 + PyTorch 1.6.
- Refer to INSTALL.md for the installation of OpenPCDet.
- Install the ros_numpy package manually: [Source code][Install]
Absolute paths in the following files may need to be adjusted:
| file path | Line(s) |
| --- | --- |
| src/camera_info/get_cam_info.cpp | 26, 64, 102, 140, 178, 216, 254, 292, 330, 368 |
| src/LidCamFusion/OpenPCDet/tools/cfgs/custom_models/pointrcnn.yaml | 4, 5 |
| src/LidCamFusion/OpenPCDet/tools/cfgs/custom_models/pv_rcnn.yaml | 5, 6 |
Build the project from the Dockerfile:

```shell
docker build -t [name]:tag /docker/
```

or pull the image directly:

```shell
docker pull gzzyyxy/crlfnet:yxy
```
This needs ROS to be installed.

```shell
cd /ROOT
# launch the site
roslaunch site_model spwan.launch
# launch the vehicles (optional)
roslaunch pkg racecar.launch
```
This part applies a Kalman filter to the real-time radar data.

- Set `use_cuda` to `True` in `src/site_model/config/config.yaml` to use the GPU.
- Download `yolo_weights.pth` from jbox and move it to `src/site_model/src/utils/yolo/model_data`.
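The radar-tracking idea can be illustrated with a minimal one-dimensional Kalman filter. This is only a sketch: the constant-velocity model, the noise values, and the function name are assumptions for illustration, not the project's actual implementation.

```python
import numpy as np

def kalman_step(x, P, z, dt=0.1, q=0.01, r=0.25):
    """One predict/update cycle of a constant-velocity Kalman filter.

    x: state [position, velocity], P: 2x2 state covariance,
    z: noisy radar range measurement (scalar).
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we only measure position
    Q = q * np.eye(2)                       # process noise
    R = np.array([[r]])                     # measurement noise

    # predict
    x = F @ x
    P = F @ P @ F.T + Q

    # update
    y = z - (H @ x)[0]                      # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T / S[0, 0]                   # Kalman gain, shape (2, 1)
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

np.random.seed(0)                           # deterministic demo
x, P = np.array([0.0, 0.0]), np.eye(2)
for t in range(50):
    true_pos = 1.0 * 0.1 * (t + 1)          # target moving at 1 m/s
    z = true_pos + np.random.normal(0, 0.5) # noisy radar measurement
    x, P = kalman_step(x, P, z)
```

After 50 noisy measurements the filtered position should sit close to the true 5 m mark, with a much smaller variance than any single measurement.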
The steps to run the radar-camera fusion are listed as follows. For the last command, an optional parameter `--save` or `-s` is available if you need to save the tracks of the vehicles as images. The `--mode` or `-m` parameter has three options: `normal`, `off-yolo` and `from-save`. The `off-yolo` and `from-save` modes enable the user to run YOLO separately to simulate a higher FPS.
```shell
#--- AFTER THE SITE LAUNCHED ---#
# run the radar message filter
rosrun site_model radar_listener.py
# run the rad-cam fusion program
cd src/site_model
python -m src.RadCamFusion.fusion [-m MODE] [-s]
```
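The `-m`/`-s` interface described above might be parsed along these lines; this is a hypothetical reconstruction, and the real `fusion` module's argument handling may differ.

```python
import argparse

def build_parser():
    # hypothetical sketch of the radar-camera fusion CLI described above
    parser = argparse.ArgumentParser(description="radar-camera fusion")
    parser.add_argument("-m", "--mode",
                        choices=["normal", "off-yolo", "from-save"],
                        default="normal",
                        help="run YOLO inline, run it separately, or replay saved results")
    parser.add_argument("-s", "--save", action="store_true",
                        help="save the vehicle tracks as images")
    return parser

# e.g. `python -m src.RadCamFusion.fusion -m off-yolo -s`
args = build_parser().parse_args(["-m", "off-yolo", "-s"])
```

Constraining `--mode` with `choices` makes argparse reject anything outside the three documented modes with a usage error.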
Calibration parameters are needed for the camera-related data transformations. Once the physical models are modified, update the camera calibration parameters:
```shell
#--- AFTER THE SITE LAUNCHED ---#
# get physical parameters of cameras
rosrun site_model get_cam_info
# generate calibration formula according to parameters of cameras
python src/site_model/src/utils/generate_calib.py
```
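The generated calibration formula amounts to a pinhole projection from world to pixel coordinates. A minimal numpy sketch is shown below; the intrinsic and extrinsic values are made up for illustration and are not the site's actual parameters.

```python
import numpy as np

# hypothetical intrinsics: 500 px focal length, principal point (320, 240)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
# hypothetical extrinsics: identity rotation, camera offset 2 m along the axis
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])

def project(point_world):
    """Project a 3-D world point into pixel coordinates."""
    p_cam = R @ point_world + t          # world frame -> camera frame
    uvw = K @ p_cam                      # camera frame -> homogeneous image coords
    return uvw[:2] / uvw[2]              # perspective divide

# a point on the optical axis lands on the principal point
uv = project(np.array([0.0, 0.0, 2.0]))
```

Points on the optical axis project exactly onto the principal point, which is a quick sanity check for any calibration matrix.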
This part integrates OpenPCDet for real-time lidar object detection. Refer to CustomDataset.md to see how to proceed with a self-produced dataset using only raw lidar data.
Configurations for the model and the dataset need to be specified:

- Model configs: `tools/cfgs/custom_models/XXX.yaml`
- Dataset configs: `tools/cfgs/dataset_configs/custom_dataset.yaml`

Currently `pointrcnn.yaml` and `pv_rcnn.yaml` are supported.
Create the dataset infos before training:

```shell
cd OpenPCDet/
python -m pcdet.datasets.custom.custom_dataset create_custom_infos tools/cfgs/dataset_configs/custom_dataset.yaml
```

The files `custom_infos_train.pkl`, `custom_dbinfos_train.pkl` and `custom_infos_test.pkl` will be saved to `data/custom`.
Specify the model using the YAML files defined above.

```shell
cd tools/
python train.py --cfg_file path/to/config/file/
```

For example, to train `PV_RCNN`:

```shell
cd tools/
python train.py --cfg_file cfgs/custom_models/pv_rcnn.yaml --batch_size 2 --workers 4 --epochs 80
```
Download a pretrained model through these links:

| model | time cost | URL |
| --- | --- | --- |
| PointRCNN | ~3h | Google drive / Jbox |
| PV_RCNN | ~6h | Google drive / Jbox |
Prediction on a local dataset helps to check the training results. Prepare the input properly.

```shell
python pred.py --cfg_file path/to/config/file/ --ckpt path/to/checkpoint/ --data_path path/to/dataset/
```

For example:

```shell
python pred.py --cfg_file cfgs/custom_models/pv_rcnn.yaml --ckpt ../output/custom_models/pv_rcnn/default/ckpt/checkpoint_epoch_80.pth --data_path ../data/custom/testing/velodyne/
```
Visualize the results in rviz; the white boxes represent the vehicles.
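OpenPCDet represents each detection as a 7-tuple (x, y, z, dx, dy, dz, heading); turning one into the eight corners of the box drawn in rviz can be sketched as follows. This mirrors, rather than reuses, the project's own utilities, and the function name is an assumption.

```python
import numpy as np

def box_to_corners(box):
    """(x, y, z, dx, dy, dz, heading) -> (8, 3) array of corner points."""
    x, y, z, dx, dy, dz, yaw = box
    # unit-cube corners centered at the origin
    corners = np.array([[sx, sy, sz]
                        for sx in (-0.5, 0.5)
                        for sy in (-0.5, 0.5)
                        for sz in (-0.5, 0.5)])
    corners *= np.array([dx, dy, dz])                 # scale to box size
    rot = np.array([[np.cos(yaw), -np.sin(yaw), 0.0], # rotate around z (heading)
                    [np.sin(yaw),  np.cos(yaw), 0.0],
                    [0.0,          0.0,         1.0]])
    return corners @ rot.T + np.array([x, y, z])      # translate to box center

# an axis-aligned 4 x 2 x 1.5 m box centered at (1, 2, 0)
corners = box_to_corners((1.0, 2.0, 0.0, 4.0, 2.0, 1.5, 0.0))
```

Connecting the 12 edges between those corners as rviz line markers reproduces the wireframe boxes seen in the visualization.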
Follow these steps for lidar-camera fusion only. Some of them need separate bash terminals. For the last command, the additional parameter `--save_result` is required if you need to save the fusion results as images.
```shell
#--- AFTER THE SITE LAUNCHED ---#
# cameras around lidars start working
python src/site_model/src/LidCamFusion/camera_listener.py
# lidars start working
python src/site_model/src/LidCamFusion/pointcloud_listener.py
# combine all the point clouds and fix their coords
rosrun site_model pointcloud_combiner
# start camera-lidar fusion
cd src/site_model/
python -m src.LidCamFusion.fusion [--config] [--eval] [--re] [--disp] [--printl] [--printm]
```
Some problems may occur during debugging.
- Confused: set `batch_size=1` and still out of memory: open-mmlab/OpenPCDet#140
- Segmentation fault (core dumped) when running demo.py: open-mmlab/OpenPCDet#846
- "N > 0 assert failed. CUDA kernel launch blocks must be positive, but got N = 0" when training: open-mmlab/OpenPCDet#945
- raise NotImplementedError, NaN or Inf found in input tensor when training: open-mmlab/OpenPCDet#280
- fix recall calculation bug for empty scene: open-mmlab/OpenPCDet#908
- Installation error "fatal error: THC/THC.h: No such file or directory #include <THC/THC.h>": open-mmlab/OpenPCDet#1014
- ...
- Welcome to report more issues!