aranga81 / Udacity-CapstoneProject


SDCND Capstone Project

This is the project repo for the final project of the Udacity Self-Driving Car Nanodegree: Programming a Real Self-Driving Car. For more information about the project, see the project introduction here.

Team members:

Name                     Email Address
Saimir Baci (TL)         saimirbaci@hotmail.com
Selçuk Çavdar            cavdarselcuk@gmail.com
Kadir Haspalamutgil      kadirhas@sabanciuniv.edu
Adithya Ranga            aranga@umich.edu
Dibakar Sigdel           sigdeldkr@gmail.com

Installation and Setup:

Please use one of the two installation options: either the native installation or the Docker installation.

Native Installation

  • Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.

  • If using a Virtual Machine to install Ubuntu, use at least the following configuration:

    • 2 CPU
    • 2 GB system memory
    • 25 GB of free hard drive space

    The Udacity provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using this.

  • Follow these instructions to install ROS

  • Install the Dataspeed DBW SDK

  • Download the Udacity Simulator.

Docker Installation

Install Docker

Build the Docker image

docker build . -t capstone

Run the Docker container

docker run -p 4567:4567 -v $PWD:/capstone -v /tmp/log:/root/.ros/ --rm -it capstone

Port Forwarding

To set up port forwarding, please refer to the instructions from Term 2.

Usage

  1. Clone the project repository
git clone https://github.com/udacity/CarND-Capstone.git
  2. Install Python dependencies
cd CarND-Capstone
pip install -r requirements.txt
  3. Make and run styx
cd ros
catkin_make
source devel/setup.sh
roslaunch launch/styx.launch
  4. Run the simulator

Real world testing

  1. Download the training bag that was recorded on the Udacity self-driving car.
  2. Unzip the file
unzip traffic_light_bag_file.zip
  3. Play the bag file
rosbag play -l traffic_light_bag_file/traffic_light_training.bag
  4. Launch your project in site mode
cd CarND-Capstone/ros
roslaunch launch/site.launch
  5. Confirm that traffic light detection works on real-life images

Implementation Details:

Waypoint Updater:

The planning part of this capstone comprises the waypoint loader and waypoint updater nodes. The waypoint updater node publishes the next 100 waypoints ahead of the car, maintaining a target velocity and updating at 35 Hz.

  • The updater node takes the base waypoints, the ego vehicle's current pose, and the traffic light waypoint as inputs, and publishes the final waypoints.
  • The closest waypoint index is found with a KD-tree search over all base waypoints; the final waypoints are then taken from this closest index forward, up to the number of lookahead points.
  • The traffic light detector node determines the closest traffic light and its state. If the signal is red, the waypoint velocities are updated: the decelerate_wps function computes the distance from each base waypoint to the stop line and adjusts the velocity as a function of that distance and the maximum deceleration rate. A sketch of this logic is shown after this list.
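
The snippet below is a minimal sketch of the two steps above, assuming waypoints are simple (x, y) tuples and leaving out the ROS subscriber/publisher plumbing. Constants such as LOOKAHEAD_WPS and MAX_DECEL and the exact function signatures are illustrative assumptions, not the repo's exact code.

# Minimal sketch of the waypoint updater logic (illustrative only).
import math
from scipy.spatial import KDTree

LOOKAHEAD_WPS = 100   # waypoints published ahead of the car
MAX_DECEL = 0.5       # assumed maximum deceleration in m/s^2

def closest_waypoint_idx(tree, waypoints, car_x, car_y):
    """Index of the closest base waypoint that lies ahead of the car."""
    idx = tree.query((car_x, car_y), 1)[1]
    closest, prev = waypoints[idx], waypoints[idx - 1]
    # Dot product test: if the closest waypoint is behind the car, take the next one.
    heading = (closest[0] - prev[0], closest[1] - prev[1])
    to_car = (car_x - closest[0], car_y - closest[1])
    if heading[0] * to_car[0] + heading[1] * to_car[1] > 0:
        idx = (idx + 1) % len(waypoints)
    return idx

def decelerate_wps(waypoints, target_speeds, stop_idx):
    """Ramp target speeds down so the car stops at waypoint stop_idx."""
    out = []
    for i in range(len(waypoints)):
        # Cumulative path distance from waypoint i to the stop waypoint.
        dist = sum(math.hypot(waypoints[j + 1][0] - waypoints[j][0],
                              waypoints[j + 1][1] - waypoints[j][1])
                   for j in range(i, stop_idx))
        vel = math.sqrt(2 * MAX_DECEL * dist)   # v^2 = 2 * a * d
        out.append(min(vel if vel > 1.0 else 0.0, target_speeds[i]))
    return out

# Example: build the KD-tree once from the base waypoints, query every cycle.
base_wps = [(float(i), 0.0) for i in range(200)]
tree = KDTree(base_wps)
idx = closest_waypoint_idx(tree, base_wps, 3.2, 0.1)
final_wps = base_wps[idx:idx + LOOKAHEAD_WPS]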

Drive by Wire (DBW) node:

The DBW node is responsible for controlling the vehicle so that it follows the given trajectory. It receives the current vehicle state and publishes steering, throttle, and brake commands as long as there is no intervention from the driver. For the throttle, a PI controller is used, with a maximum throttle limit to avoid sudden accelerations. The brake command is the torque required to produce the desired deceleration, estimated from the vehicle's parameters. Steering is computed from the desired angular velocity, which is provided by the pure pursuit algorithm in the waypoint follower node. The control loop runs at 50 Hz. A sketch of the throttle and brake logic is shown below.
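
The following is a minimal sketch of that control scheme under stated assumptions: the PI gains, the throttle cap, and the vehicle parameters (mass, wheel radius) are illustrative values rather than the ones used in the repo, and any filtering of the measured velocity is omitted.

# Illustrative throttle/brake controller, run once per 50 Hz cycle.
VEHICLE_MASS = 1736.35   # kg, assumed
WHEEL_RADIUS = 0.2413    # m, assumed
MAX_THROTTLE = 0.4       # cap to avoid sudden accelerations
DECEL_LIMIT = -1.0       # m/s^2, assumed

class PIController(object):
    def __init__(self, kp, ki, min_out, max_out):
        self.kp, self.ki = kp, ki
        self.min_out, self.max_out = min_out, max_out
        self.integral = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        out = self.kp * error + self.ki * self.integral
        return max(self.min_out, min(out, self.max_out))

def control(target_v, current_v, dt, pi):
    """Return (throttle, brake torque in N*m) for one control cycle."""
    error = target_v - current_v
    throttle = pi.step(error, dt)
    brake = 0.0
    if target_v == 0.0 and current_v < 0.1:
        # Hold the car in place at a stop with a constant torque.
        throttle, brake = 0.0, 700.0
    elif throttle < 0.1 and error < 0.0:
        # Slowing down: convert the desired deceleration into a wheel torque.
        throttle = 0.0
        decel = max(error, DECEL_LIMIT)   # simplification: velocity error as deceleration
        brake = abs(decel) * VEHICLE_MASS * WHEEL_RADIUS
    return throttle, brake

# Example: one cycle at 50 Hz with assumed PI gains.
pi = PIController(kp=0.3, ki=0.1, min_out=0.0, max_out=MAX_THROTTLE)
throttle, brake = control(target_v=11.1, current_v=9.0, dt=0.02, pi=pi)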

Traffic light detection:

The node subscribes to the camera image and traffic light messages, and publishes the waypoint and state of the next traffic light as topics. For detecting the traffic lights we trained a network based on the SSD architecture, which has the advantage of detecting in a single stage with low inference overhead. We used transfer learning with the TensorFlow Object Detection API, starting from a network pretrained on the COCO dataset. For training data we used a dataset built by previous students, covering both scenarios: the simulator and the Udacity site. In the simulator scenario the detector was initially poor at detecting lights that were far away, so we annotated additional data, which improved the quality of the traffic light detector. Finally, we compute the distance from the vehicle to the next upcoming traffic light and run inference only when close to it; based on the detected light state, the next waypoint is published. A sketch of this gating logic is shown below.
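
Below is a minimal sketch of that distance gating, assuming the car and light positions are already expressed as waypoint indices. The DETECTION_RANGE_WPS threshold is an illustrative assumption, the state constants follow the styx_msgs/TrafficLight message, and the classifier is assumed to expose a get_classification(image) method that runs the SSD model.

# Illustrative gating of traffic light inference by distance along the track.
RED, YELLOW, GREEN, UNKNOWN = 0, 1, 2, 4   # styx_msgs/TrafficLight constants
DETECTION_RANGE_WPS = 100                  # assumed gating threshold in waypoints

def process_traffic_lights(car_wp_idx, light_stop_wp_idx, classifier, camera_image):
    """Return the stop-line waypoint index to publish, or -1 if the car need not stop."""
    if light_stop_wp_idx - car_wp_idx > DETECTION_RANGE_WPS:
        return -1                                          # light too far away: skip inference
    state = classifier.get_classification(camera_image)    # SSD forward pass
    return light_stop_wp_idx if state == RED else -1

Skipping inference when the light is far away keeps the camera callback cheap for most of the drive, since the SSD model is only evaluated near intersections.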

About

License: MIT License

