bcornelis / CarND-Capstone


Project Explanation

The following changes have been applied to the codebase:

dbw_node.py

The drive-by-wire node uses data from the /twist_cmd, /current_velocity and /vehicle/dbw_enabled topics to control the throttle, brake and steering, which are commanded by publishing messages to their respective topics. The node subscribes to the required topics, and the callbacks simply store the message they receive. In the control loop, a twist controller computes the throttle, brake and steering values, and if drive-by-wire is enabled (dbw_enabled, i.e. the car is not in manual mode) those values are published to the command topics. The twist controller is initialized with defaults taken from the configuration.
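
A minimal sketch of the node is shown below. The topic names come from the description above; everything else (message types, the 50 Hz loop rate, the Controller interface) follows the standard Udacity project skeleton and has not been verified against this repo.

#!/usr/bin/env python
import rospy
from std_msgs.msg import Bool
from geometry_msgs.msg import TwistStamped
from dbw_mkz_msgs.msg import ThrottleCmd, BrakeCmd, SteeringCmd
from twist_controller import Controller

class DBWNode(object):
    def __init__(self):
        rospy.init_node('dbw_node')

        # the callbacks only store the latest message; the loop does the work
        self.twist_cmd = None
        self.current_velocity = None
        self.dbw_enabled = False

        rospy.Subscriber('/twist_cmd', TwistStamped, self.twist_cb)
        rospy.Subscriber('/current_velocity', TwistStamped, self.velocity_cb)
        rospy.Subscriber('/vehicle/dbw_enabled', Bool, self.dbw_cb)

        self.throttle_pub = rospy.Publisher('/vehicle/throttle_cmd', ThrottleCmd, queue_size=1)
        self.brake_pub = rospy.Publisher('/vehicle/brake_cmd', BrakeCmd, queue_size=1)
        self.steer_pub = rospy.Publisher('/vehicle/steering_cmd', SteeringCmd, queue_size=1)

        self.controller = Controller()  # the real code passes vehicle config from rosparam
        self.loop()

    def twist_cb(self, msg):
        self.twist_cmd = msg

    def velocity_cb(self, msg):
        self.current_velocity = msg

    def dbw_cb(self, msg):
        self.dbw_enabled = msg.data

    def loop(self):
        rate = rospy.Rate(50)  # the DBW system expects commands at 50 Hz
        while not rospy.is_shutdown():
            if self.twist_cmd is not None and self.current_velocity is not None:
                throttle, brake, steer = self.controller.control(
                    self.twist_cmd, self.current_velocity)
                if self.dbw_enabled:  # only publish when not in manual mode
                    self.throttle_pub.publish(ThrottleCmd(
                        enable=True, pedal_cmd=throttle,
                        pedal_cmd_type=ThrottleCmd.CMD_PERCENT))
                    self.brake_pub.publish(BrakeCmd(
                        enable=True, pedal_cmd=brake,
                        pedal_cmd_type=BrakeCmd.CMD_TORQUE))
                    self.steer_pub.publish(SteeringCmd(
                        enable=True, steering_wheel_angle_cmd=steer))
            rate.sleep()

if __name__ == '__main__':
    DBWNode()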

twist_controller.py

The implementation of the twist controller uses the following provided implementations:

  • YawController (yaw_controller property): controller used to find the steering value
  • PID (throttle_pid property): PID controller used to find the proper throttle value

The generated values:

  • throttle: using the PID controller with the current velocity error and the time delta (difference between the previous time and the current time)
  • brake: if the PID controller generates a negative value, this is used as the brake value
  • steer: the value returned by the yaw controller

If the throttle value is positive, the brake value is set to 0. If the throttle value is negative, throttle is set to 0 and brake is set to the negative throttle value.

There's a slight modification to the brake value: some properties of the car (mass, fuel capacity and wheel radius) are used to turn it into a more realistic brake torque.
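
For illustration, a minimal sketch of that computation; the helper name is hypothetical, and GAS_DENSITY is the constant used in the Udacity starter code:

GAS_DENSITY = 2.858  # kg per US gallon, from the Udacity starter constants

def compute_brake_torque(pid_output, vehicle_mass, fuel_capacity, wheel_radius):
    # Hypothetical helper: turn a negative PID output into a brake torque (N*m).
    if pid_output >= 0:
        return 0.0  # accelerating, so no brake
    deceleration = -pid_output
    # the total mass includes a full tank of fuel;
    # torque = mass * deceleration * wheel radius
    total_mass = vehicle_mass + fuel_capacity * GAS_DENSITY
    return total_mass * deceleration * wheel_radius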

waypoint_updater.py

The trigger is the pose_cb callback function, which is called every time a new car position is consumed from the /current_pose topic. Whenever a message is received, the send_final_waypoints method is called. Its logic (sketched after the list below) is:

  • find the waypoint closest to the current car position using the get_next_waypoint_index method
  • generate a new array of waypoints in front of the car. The first waypoint is the one closest to the current car's location; the following waypoints, up to LOOKAHEAD_WPS of them, are included in this array
  • set the velocity of all these waypoints to the maximum value (the PID controller handles acceleration and braking, so maximum values are not a problem)
  • if a red light is detected within the next LOOKAHEAD_WPS waypoints, linearly decrease the waypoint velocities, starting at the current waypoint and ending at the one at the light index, so the velocity reaches 0 at the light
  • create a Lane object containing the waypoints and publish it to the /final_waypoints topic
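
A sketch of send_final_waypoints following those steps (attribute and helper names such as base_waypoints, max_velocity and red_light_index are illustrative assumptions, not necessarily the ones used in the code):

import copy
from styx_msgs.msg import Lane

LOOKAHEAD_WPS = 200  # assumed value

def send_final_waypoints(self):  # sketched as a WaypointUpdater method
    start = self.get_next_waypoint_index()
    waypoints = copy.deepcopy(self.base_waypoints[start:start + LOOKAHEAD_WPS])

    # default: maximum velocity everywhere; the PID controller takes
    # care of smooth acceleration and braking
    for wp in waypoints:
        wp.twist.twist.linear.x = self.max_velocity

    # red light ahead: ramp the velocity down linearly to 0 at the light
    if self.red_light_index is not None:
        stop = self.red_light_index - start
        if 0 <= stop < len(waypoints):
            v0 = waypoints[0].twist.twist.linear.x
            for i, wp in enumerate(waypoints):
                if i >= stop:
                    wp.twist.twist.linear.x = 0.0
                else:
                    wp.twist.twist.linear.x = v0 * (1.0 - float(i) / stop)

    lane = Lane()
    lane.waypoints = waypoints
    self.final_waypoints_pub.publish(lane)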

To find the waypoint closest to a specific location (the current car's position, a light position, ...), the get_next_waypoint_index method is implemented. It iterates over all waypoints and returns the closest one that lies in front of the car.
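
A sketch of that search; the Euclidean linear scan matches the description above, while the quaternion-to-yaw helper is an assumption:

import math

def get_next_waypoint_index(self):
    # linear scan for the nearest waypoint
    pos = self.current_pose.position
    best_index, best_dist = 0, float('inf')
    for i, wp in enumerate(self.base_waypoints):
        dx = wp.pose.pose.position.x - pos.x
        dy = wp.pose.pose.position.y - pos.y
        dist = math.sqrt(dx * dx + dy * dy)
        if dist < best_dist:
            best_index, best_dist = i, dist

    # if the nearest waypoint is behind the car, take the next one instead
    wp = self.base_waypoints[best_index].pose.pose.position
    heading = math.atan2(wp.y - pos.y, wp.x - pos.x)
    yaw = self.get_yaw(self.current_pose.orientation)  # assumed helper
    angle = abs(math.atan2(math.sin(heading - yaw), math.cos(heading - yaw)))
    if angle > math.pi / 2:
        best_index += 1
    return best_index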

tl_detector.py

The most important update in this class is the traffic-light lookup: it iterates over the stop line positions in the stop_line_positions array and finds the closest one. If that stop line is in front of the car, the get_light_state method is used to determine the state of the light (red or not). To find the state, the configured classifier is used.
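
A sketch of that flow; the method names follow the Udacity skeleton and are assumptions:

from styx_msgs.msg import TrafficLight

def process_traffic_lights(self):
    # find the nearest stop line ahead of the car, then classify its light
    car_index = self.get_next_waypoint_index()
    best_index, best_light = None, None
    for light, stop_line in zip(self.lights, self.stop_line_positions):
        line_index = self.get_closest_waypoint(stop_line[0], stop_line[1])
        if line_index >= car_index and (best_index is None or line_index < best_index):
            best_index, best_light = line_index, light
    if best_light is not None:
        # the red/not-red decision is delegated to the configured classifier
        return best_index, self.get_light_state(best_light)
    return -1, TrafficLight.UNKNOWN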

There are 3 different implementations for the tl_classifier (check tl_detector.py):

  • LIGHT_CLASSIFIER_TOPIC: Implemented in tl_classifier_topic.py. This implementation is used for testing and listens on a specific topic whose message type is an Int32 representing the state the next light should have. Check the class description for more information about how it should be used
  • LIGHT_CLASSIFIER_TFMODELAPI: Implemented in tl_classifier.py. This implementation uses the TensorFlow Object Detection API to check the light state. More information can be found in README.
  • LIGHT_CLASSIFIER_OPENCV: Implemented in tl_classifier_opencv.py. In this implementation, OpenCV is used to check whether any red lights are present in the image, following the red-circle detection logic explained in https://solarianprogrammer.com/2015/05/08/detect-red-circles-image-using-opencv/.

By default, the LIGHT_CLASSIFIER_OPENCV classifier is enabled.
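
A sketch of that OpenCV approach, following the linked article; the HSV thresholds and Hough parameters below are illustrative assumptions:

import cv2
import numpy as np

def contains_red_light(bgr_image):
    # returns True if a red circle (a candidate red light) is found
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)

    # red wraps around the hue axis, so threshold both ends and combine
    lower = cv2.inRange(hsv, np.array([0, 100, 100]), np.array([10, 255, 255]))
    upper = cv2.inRange(hsv, np.array([160, 100, 100]), np.array([179, 255, 255]))
    mask = cv2.addWeighted(lower, 1.0, upper, 1.0, 0.0)

    # smooth the mask so the Hough transform does not pick up noise
    mask = cv2.GaussianBlur(mask, (9, 9), 2)
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, 1, 20,
                               param1=50, param2=30, minRadius=5, maxRadius=50)
    return circles is not None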

Improvements:

  • the closest-waypoint search is a linear scan over all waypoints and could be optimised considerably (e.g. with a spatial index)

This is the project repo for the final project of the Udacity Self-Driving Car Nanodegree: Programming a Real Self-Driving Car. For more information about the project, see the project introduction here.

Please use one of the two installation options: the native installation or the Docker installation.

Native Installation

  • Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.

  • If using a Virtual Machine to install Ubuntu, use the following configuration as minimum:

    • 2 CPU
    • 2 GB system memory
    • 25 GB of free hard drive space

    The Udacity provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using this.

  • Follow these instructions to install ROS

  • Dataspeed DBW

  • Download the Udacity Simulator.

Docker Installation

Install Docker

Build the docker container

docker build . -t capstone

Run the docker file

docker run -p 4567:4567 -v $PWD:/capstone -v /tmp/log:/root/.ros/ --rm -it capstone

Port Forwarding

To set up port forwarding, please refer to the instructions from term 2

Usage

  1. Clone the project repository
git clone https://github.com/udacity/CarND-Capstone.git
  2. Install python dependencies
cd CarND-Capstone
pip install -r requirements.txt
  3. Make and run styx
cd ros
catkin_make
source devel/setup.sh
roslaunch launch/styx.launch
  4. Run the simulator

Real world testing

  1. Download the training bag that was recorded on the Udacity self-driving car (a bag demonstrating the correct predictions in autonomous mode can be found here)
  2. Unzip the file
unzip traffic_light_bag_files.zip
  3. Play the bag file
rosbag play -l traffic_light_bag_files/loop_with_traffic_light.bag
  4. Launch your project in site mode
cd CarND-Capstone/ros
roslaunch launch/site.launch
  5. Confirm that traffic light detection works on real life images
