Develop a Roomba-like system by integrating localization and a motion planner to achieve maximum coverage.

JetsonNano_Roomba

File Structure

This project shares the same ROS Jetson configuration root files as: https://github.com/lmqZach/JetsonNano_PoseEstimation/blob/master/README.md

The new algorithm for this phase of the project resides under root/navigation_dev/src

|-- ROOT
  |-- README.md
  |-- CMakeLists.txt
  |-- init.sh
  |-- jetbot_ROS
  |   |-- ```
  |-- jetson-inference
  |   |-- ```
  |-- navigation_dev
  |   |-- CMakeLists.txt
  |   |-- package.xml
  |   |-- launch
  |   |-- msg
  |   |-- src
  |          |-- april_detect.py
  |          |-- localization_node.py
  |          |-- planner_node.py
  |-- ros_deep_learning
      |-- ...

Objective

The objective of the final phase of the project is to design a "Roomba"-like system. The robot should be able to navigate an environment and provide a level of coverage of the area.

Detailed Tasks

  1. Set up an environment in a 10ft × 10ft area with landmarks at the edges, as shown in the figure. The landmarks could be AprilTag markers, as used before. Measure the positions of your landmarks / walls.

(Figure: environment setup with landmarks at the edges)

  2. Use the localization system from the prior phase of the project to ensure the robot has a certain level of situational awareness.

  3. Describe the behaviors needed to provide coverage / avoidance.

  4. Implement a basic version of the system using ROS or Python.

  5. Provide a diagram that explains the control flow in your system.

  6. Demonstrate the performance of the system with a video / graphical illustration of the trajectories generated by the system.

Report

Logistics:

The objective of this assignment is to design an algorithm that allows the Jetbot to sweep through the environment with self-guided coverage and avoidance. The balance between theoretical maximum coverage and consistent real-world coverage is the main factor that led the team to a grid method combined with depth-first search (DFS). The general design architecture is shown in Figure 1.


Figure 1: Architecture Diagram

Localization:

World Coordinates:

Our localization approach in this assignment is derived from the one used in the last project. With the environment obstacles / boundaries represented by 12 AprilTags, every tag observed in a frame must be matched to a specific tag in the map. Our group therefore computes each observation's Euclidean distance, in world coordinates, to all map tags of the same category (using the previous frame's robot position plus the current frame's relative tag coordinates), and assigns the observation to the tag with the minimum distance. Once every observation is associated with a ground-truth tag coordinate, we compute the robot's world position and orientation by averaging over all observations.
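As a concrete illustration of this association step, here is a minimal sketch; the tag positions, data layout, and function names are hypothetical, not the actual contents of localization_node.py:

```python
# Illustrative sketch of the nearest-tag data association described above.
# Tag positions and names are hypothetical; orientation handling is omitted.
import numpy as np

# Measured world positions of the boundary tags, grouped by AprilTag ID
# (several physical tags may share the same ID / "category").
KNOWN_TAGS = {
    0: [np.array([0.0, 1.5]), np.array([3.0, 1.5])],
    # ... remaining IDs and measured positions
}

def associate(tag_id, rel_pos, prev_robot_pos):
    """Match a detection to the map tag of the same ID that is closest
    to its estimated world position (previous pose + relative offset)."""
    est_world = prev_robot_pos + rel_pos
    return min(KNOWN_TAGS[tag_id], key=lambda p: np.linalg.norm(p - est_world))

def localize(detections, prev_robot_pos):
    """Average the robot position implied by each associated detection.

    detections: list of (tag_id, rel_pos), with rel_pos already rotated
    into the world frame.
    """
    implied = []
    for tag_id, rel_pos in detections:
        world_tag = associate(tag_id, rel_pos, prev_robot_pos)
        implied.append(world_tag - rel_pos)  # robot position from this tag
    return np.mean(implied, axis=0)
```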

Map and Grid Diagram:

The Jetbot camera has the limitation that tag detections become unavailable or inaccurate when the Jetbot is very close (around 0.2-0.3m) to an AprilTag. Lacking another form of sensing or distance estimation in this scenario, the localization method above requires us to 'shrink' the actual map into a detectable range by padding the boundary by 0.5m. Our grid then represents the inner map in a 10x10 arrangement, with each cell a 0.2x0.2m square in the 2-D plane. We let each outermost cell represent the environment boundary by marking it '1' in the code, so the boundary has an effective thickness of 0.5m. Each cell can be transformed into a potential waypoint by the 'trans2waypoint' function.
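A minimal sketch of such a grid and the cell-to-waypoint conversion is given below; only the name 'trans2waypoint' comes from our code, and the constants assume the 0.2m cells and 0.5m padding described above:

```python
# Hypothetical reconstruction of the grid setup; not the actual code.
import numpy as np

CELL = 0.2   # cell edge length (m)
PAD = 0.5    # padding between the real walls and the planning area (m)
N = 10       # 10x10 grid

grid = np.zeros((N, N), dtype=int)
grid[0, :] = grid[-1, :] = grid[:, 0] = grid[:, -1] = 1  # boundary cells

def trans2waypoint(row, col):
    """Map a grid cell to the world-frame (x, y) of its centre, offset
    by the padding from the map origin."""
    return PAD + (col + 0.5) * CELL, PAD + (row + 0.5) * CELL
```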

Planning:

In the initial state, only the boundary cells and the start point are marked '1', while all other cells, marked '0', are the waypoints we would like to cover. With this defined, path planning is done with depth-first search, implemented by the 'next' function in our code. The function lists the four cells surrounding the current cell in the sorted order 'right - up - left - down', and outputs the first unvisited point in that order. This approach generates a snake-like path among the cells (shown in Figure 2), which is later transformed into waypoints. Simultaneously, each visited cell is marked '1' in the grid.
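An illustrative sketch of this DFS sweep follows; the report's 'next' function is renamed 'next_cell' here to avoid shadowing Python's built-in, and the real implementation in planner_node.py may differ in detail:

```python
# Illustrative sketch of the DFS coverage sweep described above.
ORDER = [(0, 1), (-1, 0), (0, -1), (1, 0)]  # right, up, left, down

def next_cell(grid, row, col):
    """Return the first unvisited ('0') neighbour in right-up-left-down
    order, or None when all four neighbours are visited or boundary."""
    for dr, dc in ORDER:
        r, c = row + dr, col + dc
        if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
            return r, c
    return None

def sweep(grid, start):
    """Depth-first coverage: advance to unvisited neighbours, mark them
    '1', and backtrack when no neighbour is available."""
    stack, path = [start], []
    while stack:
        r, c = stack[-1]
        if grid[r][c] == 0:
            grid[r][c] = 1
            path.append((r, c))
        nxt = next_cell(grid, r, c)
        if nxt is None:
            stack.pop()          # dead end: backtrack
        else:
            stack.append(nxt)
    return path
```

On an open grid, the fixed right-up-left-down preference is what produces the boustrophedon ("snake") pattern of Figure 2.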

Execution:

The general skeleton of this assignment, including tag differentiation, localization, waypoint following, and motion, is carried over from the prior assignment. The main new components are path planning, grid generation, and the grid-to-world conversion. Our assumption is that the area to be covered is the smaller map obtained after padding the actual map boundary by 0.5m. Upon receiving tag information, we first calculate the current robot position and orientation in the world frame (localization), as done in the previous homework. We then plan the next waypoint (i.e., where to go) given the current robot position and orientation. This is done in two parts: first, we determine whether the robot has reached the last designated waypoint. If it has, we project the path-planning problem into grid space and use DFS to search for the next cell that is not yet covered and is valid to move to. Once we find that cell, we project it back into the world frame. The performance and limitations of this implementation are discussed below.
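Putting the pieces together, the per-frame control flow can be sketched as follows, reusing the hypothetical helpers from the earlier snippets; the arrival tolerance and class structure are assumptions rather than the verified contents of planner_node.py:

```python
# Hedged sketch of the per-frame control flow described above; reuses
# localize, next_cell, and trans2waypoint from the earlier sketches.
import numpy as np

REACH_TOL = 0.05  # assumed waypoint-arrival tolerance (m)

def drive_towards(pose, waypoint):
    """Placeholder for the differential-drive motion command."""
    pass

class Planner:
    def __init__(self, grid, start_cell):
        self.grid = grid
        self.cell = start_cell
        self.waypoint = np.array(trans2waypoint(*start_cell))

    def on_tags(self, detections, prev_pose):
        pose = localize(detections, prev_pose)        # world-frame position
        if np.linalg.norm(pose - self.waypoint) < REACH_TOL:
            nxt = next_cell(self.grid, *self.cell)    # plan in grid space
            if nxt is None:
                return pose, None  # stuck or done (backtracking omitted)
            self.grid[nxt[0]][nxt[1]] = 1             # mark as covered
            self.cell = nxt
            self.waypoint = np.array(trans2waypoint(*nxt))  # back to world
        drive_towards(pose, self.waypoint)
        return pose, self.waypoint
```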

Performance and Limitations:

With the grid method implemented, our algorithm can theoretically cover all 100 cells inside the padded map. Our four actual runs show a stable, consistent path and coverage in approximately the same amount of time, which is a significant advantage. The Jetbot covers the full area of the shrunken map (marked by red lines in the video). The Jetbot's 2-D footprint of 14x17cm is very close to our assumed cell size of 0.2x0.2m, under which the coverage should be very close to 100%. The limitations fall into two categories: the robot's motor limits versus the density of cells, and errors introduced by camera inaccuracy. Denser cells directly reduce the turning radius available at each 'snake' turn, which is also limited by the Jetbot's differential-drive turning model. On the other hand, neither the motor output nor the relative tag locations from the camera are completely accurate, which leads to localization error and path variations.

Potential Improvements:

One possible improvement to tackle the hardware inaccuracy is to execute our planning and motion model repeatedly. For example, the Jetbot could be coded to retrace the path in reverse after the first pass and keep repeating. However, this method offers little guarantee of a specific degree of improvement; it may only help increase the coverage. In addition, a correct EKF application may help reduce localization error and potentially bound that error within a quantified range; because our prior EKF was not very effective, this method was not adopted this time. The other improvement is to implement a denser grid with a more sophisticated motion model for reaching a specific waypoint. Theoretically, when the map is divided finely enough, there should be full coverage. However, any grid size needs to match the motion model, otherwise the trajectory will not even be complete.

Graphs:


Figure 2: Diagram of Path (in Grid Representation)

Demonstration Video Link:

https://www.youtube.com/watch?v=8HUxgY_jkmQ

Map Sketch

(Figure: map sketch)
