This is a graduation design project named "Visual Perception and Recognition for Autonomous Driving".
The code requires Python 2.7 and TensorFlow 1.0, as well as the following Python libraries:
- matplotlib
- numpy
- Pillow
- scipy
- runcython
- commentjson
These modules can be installed with:
pip install numpy scipy pillow matplotlib runcython commentjson
or:
pip install -r requirements.txt
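Before running the setup steps, it can help to verify that the required libraries are importable. The following is a minimal sketch of my own (not part of the repository); the module names are taken from the list above, and note that Pillow imports as PIL:

```python
# Hypothetical environment check: report which required modules are missing.
import importlib

REQUIRED = ["matplotlib", "numpy", "PIL", "scipy", "commentjson"]


def missing_modules(names):
    """Return the subset of module names that cannot be imported."""
    missing = []
    for name in names:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing


if __name__ == "__main__":
    gaps = missing_modules(REQUIRED)
    if gaps:
        print("Missing: " + ", ".join(gaps))
    else:
        print("All required modules found.")
```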
1. Clone this repository:
git clone https://github.com/MPIG/Visual-Perception-and-Recognition-for-Autonomous-Driving.git
2. Initialize all submodules:
git submodule update --init --recursive
3. Build the Cython code:
cd submodules/KittiBox/submodules/utils/ && make
4. [Optional] Download the Kitti Road Data:
- Retrieve the kitti data url here: http://www.cvlibs.net/download.php?file=data_road.zip
- Call
python download_data.py --kitti_url URL_YOU_RETRIEVED
5. [Optional] Build the Kitti evaluation code:
cd submodules/KittiBox/submodules/KittiObjective2/ && make
(see submodules/KittiBox/submodules/KittiObjective2/README.md for more information)
Running the model using demo.py only requires steps 1-3. Steps 4 and 5 are only required if you want to train your own model using train.py. Note that I recommend using download_data.py instead of downloading the data yourself: the script also extracts and prepares the data. See the section Manage data storage if you want to control where the data is stored.
- Pull all patches:
git pull
- Update all submodules:
git submodule update --init --recursive
If you forget the second step, you might end up with an inconsistent repository state: you will have the new MultiNet code but run it against old submodule code. This can work, but I do not run any tests to verify it.
Run: python demo.py --gpus 0 --input data/demo/um_000005.png
to obtain a prediction using um_000005.png as input.
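To run the demo over a whole folder of images, a wrapper like the following could build and invoke one demo.py command per PNG. This is a hypothetical helper of my own (the script name and flags are taken from the command above; the helper itself is not part of the repository):

```python
# Hypothetical wrapper: run demo.py once per PNG image in a folder.
import glob
import subprocess


def demo_commands(folder, gpu=0):
    """Build one demo.py command line per PNG image in the folder."""
    return [["python", "demo.py", "--gpus", str(gpu), "--input", image]
            for image in sorted(glob.glob(folder + "/*.png"))]


def run_demo_on_folder(folder, gpu=0):
    """Invoke demo.py on every image; assumes demo.py is in the cwd."""
    for cmd in demo_commands(folder, gpu):
        subprocess.check_call(cmd)
```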
Run: python evaluate.py
to evaluate a trained model.
Run: python train.py --hypes hypes/model3.json
to train.
The model is controlled by the file hypes/model3.json. This file points the code to the implementations of the submodels. The MultiNet code then loads all the models provided and integrates their decoders into one neural network. To train on your own data, it should be enough to modify the hype files of the submodels. A good starting point is the KittiSeg model, which is very well documented.
"models": {
"segmentation" : "../submodules/KittiSeg/hypes/KittiSeg.json",
"detection" : "../submodules/KittiBox/hypes/kittiBox.json",
},
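To illustrate how such a file might be consumed, here is a minimal sketch of my own (not the repository's actual loader) that reads the "models" section and resolves each submodel path relative to the hypes file. The real code reads its hypes with commentjson; plain json suffices for this comment-free fragment:

```python
# Hypothetical loader: resolve submodel hype paths relative to the hypes file.
import json
import os


def load_model_paths(hypes_file):
    """Read the hypes file and resolve each submodel path against it."""
    with open(hypes_file) as f:
        hypes = json.load(f)
    base = os.path.dirname(os.path.realpath(hypes_file))
    return {name: os.path.normpath(os.path.join(base, rel))
            for name, rel in hypes["models"].items()}
```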