This project uses a robot arm to perform sorting tasks, using Darknet for detection, with the addition of a digital twin.
Idea: sorting socks by colour with a robot arm picking from a conveyor system.
Software tools:
- YOLOv4 neural network model - prediction of sock colour
- ROS (Robot Operating System) - controlling the robot arm and other apparatus
- MoveIt - path planning, kinematic definitions, and visualisation
- Introduction
- Requirements for the Raspberry Pi (controlling the robot)
- Requirements for the ROS master PC (path planning)
- Requirements for the ML workstation (sock prediction with the YOLOv4 model) on an RTX 3060
- Our own custom dataset
- Procedure for building the YOLO architecture
- Procedure for building the MoveIt architecture
- Procedure for the robot's Raspberry Pi
- Running in a Docker container (for the Niryo Ned robot)
- Demo
Sorting is a huge task in both homes and industries. This project is about building an open-source DIY sock-sorting robot: it predicts whether a sock is black or white, grabs it using OpenCV, performs the required path planning with MoveIt (a ROS Noetic plugin), and places it in the correct box. The entire architecture is explained in the image below:
- Bare (flashed) Ubuntu OS on a Raspberry Pi 3/4 (terminal-only version)
- ROS Noetic for the Raspberry Pi: link
- Required dependencies for running the PCA9685: link
- Sanity check using the built-in sender/receiver (talker/listener) nodes
- Correct wiring of the hardware
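The PCA9685 drives the servos by PWM, so a quick way to sanity-check the servo setup is to compute the register counts for a given pulse width. A minimal sketch, assuming a 50 Hz frame rate and the chip's 12-bit resolution; the helper names are illustrative, not taken from the project scripts:

```python
# Sketch: servo pulse width -> PCA9685 register counts.
# Assumes a 50 Hz frame rate and the chip's 12-bit (4096-step) resolution.
# Helper names are illustrative, not taken from the project scripts.

PWM_FREQ_HZ = 50        # standard hobby-servo frame rate
RESOLUTION = 4096       # PCA9685 steps per PWM period

def pulse_to_counts(pulse_us: float) -> int:
    """Convert a pulse width in microseconds to a PCA9685 'off' count."""
    period_us = 1_000_000 / PWM_FREQ_HZ   # 20,000 us per period at 50 Hz
    return round(pulse_us / period_us * RESOLUTION)

def angle_to_counts(angle_deg: float) -> int:
    """Map 0-180 degrees onto a typical 1000-2000 us servo pulse."""
    pulse_us = 1000 + (angle_deg / 180.0) * 1000
    return pulse_to_counts(pulse_us)

print(angle_to_counts(90))   # centre position -> 307
```

If a servo buzzes or hits its end stop, the 1000-2000 us range assumed here usually needs trimming for that specific servo model.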
After this, please assemble the DIY robot from Joy-It: https://joy-it.net/en/products/Robot02
- Newly flashed Ubuntu 20.04 (GUI version)
- For ROS: Noetic/Installation/Ubuntu - ROS Wiki
- For MoveIt: MoveIt Tutorials - Installation
- Sanity check using the built-in sender/receiver (talker/listener) nodes
- CMake >= 3.18
- OpenCV
- NVIDIA GPU driver 470.42.01
- CUDA 11.4
- nvcc (CUDA compiler)
- YOLOv4 (Darknet) architecture
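A common pitfall when checking the CMake >= 3.18 requirement is comparing version strings lexically ("3.9" sorts after "3.18"). A small sketch of a numeric comparison, assuming plain dotted integer version strings:

```python
# Sketch: numeric version comparison for the CMake >= 3.18 requirement.
# Comparing strings directly fails ("3.9" > "3.18" lexically), so compare
# integer tuples instead.

def version_ok(installed: str, required: str = "3.18") -> bool:
    to_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return to_tuple(installed) >= to_tuple(required)

print(version_ok("3.22.1"))  # True
print(version_ok("3.9"))     # False: 3.9 is older than 3.18
```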
https://www.kaggle.com/datasets/harigovindasamy/socks-color-dataset-white-and-black
- Install mokutil, which the commands below need:
sudo apt-get install mokutil
- Disable Secure Boot with the command:
sudo mokutil --disable-validation
- Check your Secure Boot status (enabled/disabled) with the command:
mokutil --sb-state
- Check all the prerequisite steps on the NVIDIA-recommended page
- GUI version: download and install CMake >= 3.18 from the Ubuntu store
- Command-line/built version >= 3.18: follow this link
- Make sure the correct OpenCV, drivers, CUDA, and nvcc versions are installed
- Pull the latest version of YOLOv4 from: https://github.com/AlexeyAB
- Copy our custom pre-trained models from the folder "scripts/socks_model/Robot" to the YOLO directory
- Build using the Makefile; for further information about building, visit: https://github.com/AlexeyAB
- Copy the required scripts from "scripts/for_ml_inference" to the workstation
Our sock-prediction model is now ready, so we can move on to path planning on the ROS master.
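To illustrate how the model's output feeds the sorting step, here is a sketch of reducing raw detections to one pick decision. The class names, tuple layout, and threshold are assumptions for illustration; the actual inference logic lives in "scripts/for_ml_inference":

```python
# Sketch: reducing raw YOLO detections to one sorting decision.
# Class names, tuple layout, and the confidence threshold are assumptions
# for illustration; the real logic lives in scripts/for_ml_inference.

CLASSES = ["black_sock", "white_sock"]   # assumed label order
CONF_THRESHOLD = 0.5

def pick_target(detections):
    """detections: list of (class_id, confidence, (cx, cy)) tuples.
    Return (label, centre) of the most confident detection above the
    threshold, or None if nothing qualifies."""
    valid = [d for d in detections if d[1] >= CONF_THRESHOLD]
    if not valid:
        return None
    class_id, conf, centre = max(valid, key=lambda d: d[1])
    return CLASSES[class_id], centre

# Two candidate socks in the frame; only the first passes the threshold:
dets = [(0, 0.91, (320, 240)), (1, 0.47, (100, 200))]
print(pick_target(dets))   # ('black_sock', (320, 240))
```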
- First run the following:
# Enable each repository type:
sudo add-apt-repository universe
sudo add-apt-repository multiverse
sudo add-apt-repository restricted
- Copy our "urdf_and_mesh_models_for_moveit" folder to the ROS workspace on the ROS master PC, e.g. "home/username/catkin_workspace/src/"
- Run:
roslaunch moveit_setup_assistant setup_assistant.launch
- Then select the URDF file from "urdf_and_mesh_models_for_moveit/Aura_robot/urdf/Aura_robot.urdf"; the meshes will be loaded and the robot will be visible on the right side
- Build the MoveIt architecture (naming conventions should be neat and consistent throughout the whole process)
- Copy the required code from "Robot-Arm-for-Sorting-Mechanism-using-ROS-and-YOLOv4\scripts\for_master" to the catkin workspace, then build it
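As a sketch of how the master node could turn a predicted sock colour into a MoveIt pose target: the coordinates and quaternion below are placeholder assumptions, not the project's calibrated values; in the real "scripts/for_master" code the target would be passed to moveit_commander's MoveGroupCommander before planning:

```python
# Sketch: turning the predicted sock colour into a MoveIt pose target.
# Box coordinates and the quaternion are placeholder assumptions; the
# calibrated values live in scripts/for_master.

DROP_XYZ = {
    "black": (0.25, -0.15, 0.10),   # assumed box positions in metres
    "white": (0.25,  0.15, 0.10),
}

def drop_target(colour: str) -> dict:
    x, y, z = DROP_XYZ[colour]
    # Keep the gripper pointing straight down (assumed orientation).
    return {"position": (x, y, z), "orientation": (0.0, 1.0, 0.0, 0.0)}

print(drop_target("black")["position"])   # (0.25, -0.15, 0.1)
```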
The workflow of how MoveIt works from the URDF is explained here:
- Sanity-check that all dependencies for the PCA9685 hardware are already installed and that the electrical connections are correct
- Copy "Robot-Arm-for-Sorting-Mechanism-using-ROS-and-YOLOv4\scripts\robot_pi" to the Raspberry Pi
Here we have created a Docker container with the model scripts and all the dependencies needed to run this architecture:
$ xhost local:docker
$ docker build . -t <name_for_docker_container>
$ docker run -it --rm --privileged --env="DISPLAY" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" --device="/dev/video0:/dev/video0" cdl:socks_storing2 python3 inference/only_camera_inference.py
# IP of master: 192.168.0.136
# IP of pi: 192.168.0.162
On the master:
export ROS_MASTER_URI=http://192.168.0.136:11311
export ROS_IP=192.168.0.136
On the slave:
export ROS_MASTER_URI=http://192.168.0.136:11311
export ROS_IP=192.168.0.162
# Checking if the connection was successful:
On the master:
rosrun rospy_tutorials talker.py
output: <some msg>
On the slave:
rosrun rospy_tutorials listener.py
output: <received msg>
## Custom added
source /opt/ros/noetic/setup.bash
source ~/catkin_tutorials/devel/setup.bash
export ROS_WORKSPACE=~/catkin_tutorials
# Connecting this device to the ROS master
export ROS_MASTER_URI=http://192.168.0.136:11311
export ROS_IP=192.168.0.162
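A quick sanity check of the network variables above can be done in pure Python before running any nodes. This sketch only validates that the master URI uses the default port 11311 and that, on the master machine, ROS_IP matches the host in ROS_MASTER_URI; the function name is illustrative:

```python
# Sketch: sanity-checking the ROS network variables before running nodes.
# Checks only that the master URI uses port 11311 and that, on the master
# machine, ROS_IP matches the host in ROS_MASTER_URI.

from urllib.parse import urlparse

def check_ros_net(master_uri: str, ros_ip: str, is_master: bool) -> bool:
    parsed = urlparse(master_uri)
    if parsed.scheme != "http" or parsed.port != 11311:
        return False
    if is_master and parsed.hostname != ros_ip:
        return False
    return True

# The values used in this README:
print(check_ros_net("http://192.168.0.136:11311", "192.168.0.136", True))   # master -> True
print(check_ros_net("http://192.168.0.136:11311", "192.168.0.162", False))  # slave -> True
```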
Starting the process:
- Register all ROS devices on the new Wi-Fi network (use 'sudo nano ~/.bashrc') [ADD THESE LINES IN .BASHRC]
On the master:
- run roscore
- run
On the slave: 1.
The output of the image/camera feed will look like this:
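Once the camera feed reports a sock's pixel centre, it still has to be mapped into the robot's workspace. A minimal affine sketch with placeholder calibration values; the scale and origin are assumptions and must be measured for your own camera mounting and conveyor position:

```python
# Sketch: mapping a sock's pixel centre from the camera feed into robot
# workspace coordinates. A simple affine model; scale (metres per pixel)
# and origin are placeholder calibration values, not measured ones.

def pixel_to_world(px, py, scale=0.0005, origin=(0.10, -0.16)):
    """640x480 image assumed; returns (x, y) in metres in the robot frame."""
    return (origin[0] + px * scale, origin[1] + py * scale)

# Centre of a 640x480 frame:
x, y = pixel_to_world(320, 240)
```

For a real setup the camera is rarely perfectly top-down, so a full homography (e.g. from four measured reference points) would replace this simple scale-and-offset model.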