gooose09009 / Machine_Learning-RaspberryPI_ObjectDetection_TensorFlow_OpenCV


Traffic Light Detection with Deep Learning on the Raspberry Pi, using TensorFlow and OpenCV



Requirements

Hardware Components

  1. Raspberry Pi 4
  2. RC Car Frame with 3 DC Motors (front-wheel drive, rear-wheel drive, and steering)
  3. 2 L298N Motor Driver Modules
  4. HC-SR04 Ultrasonic Sensor
  5. Pi Camera
  6. 5V/2.1A Power Bank

Major Tools/Libraries

  1. RPi.GPIO
  2. OpenCV
  3. TensorFlow/TensorFlow Lite
  4. Roboflow



Steps

Problem:

Detect the state of a traffic light with computer vision and start the car moving when it sees a green light


Building the Traffic Light

Circuit Diagram:

(circuit diagram image)
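
How the light is switched isn't shown here; if it is driven from GPIO pins, a control loop along these lines would cycle it. This is a minimal sketch: the RPi.GPIO calls are standard, but the BCM pin numbers and timings are placeholders, not the actual circuit.

    # Hypothetical sketch: cycle a three-LED traffic light with RPi.GPIO.
    # Pin numbers are placeholders -- match them to the circuit diagram.
    import time
    import RPi.GPIO as GPIO

    RED, YELLOW, GREEN = 17, 27, 22  # assumed BCM pin numbers

    GPIO.setmode(GPIO.BCM)
    for pin in (RED, YELLOW, GREEN):
        GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

    try:
        while True:
            # red -> green -> yellow, then repeat
            for pin, seconds in ((RED, 5), (GREEN, 5), (YELLOW, 2)):
                GPIO.output(pin, GPIO.HIGH)
                time.sleep(seconds)
                GPIO.output(pin, GPIO.LOW)
    finally:
        GPIO.cleanup()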


Data Collection

Data Collection was done using a webcam

Total Images Collected: 580
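
A capture loop along these lines collects frames for labeling; the camera index, key bindings, and output folder here are illustrative, not the exact script used.

    # Hypothetical sketch: grab webcam frames for the dataset with OpenCV.
    import cv2

    cap = cv2.VideoCapture(0)  # camera index assumed
    count = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow('capture', frame)
        key = cv2.waitKey(1) & 0xFF
        if key == ord('s'):  # 's' saves the current frame
            # the images/ folder must exist beforehand
            cv2.imwrite(f'images/img_{count:04d}.jpg', frame)
            count += 1
        elif key == ord('q'):  # 'q' quits
            break
    cap.release()
    cv2.destroyAllWindows()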

Image preprocessing and augmentation were done in Roboflow, which expanded the 580 source images into the final training set

Total Images for Training: 1500


Training

Model.py downloads all dependencies and creates a pipeline.config file that points to the .tfrecord files generated by Roboflow
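
The config rewrite presumably follows the common TF2 Object Detection API pattern of patching fields in the template config. A sketch of that pattern, in which every path and the class count are assumptions:

    # Hypothetical sketch of the pipeline.config rewrite. The fields are
    # standard TF2 OD API ones; every path here is a placeholder.
    import re

    with open('pipeline_file.config') as f:
        cfg = f.read()

    replacements = {
        r'fine_tune_checkpoint: ".*?"':
            'fine_tune_checkpoint: "pretrained/checkpoint/ckpt-0"',
        r'label_map_path: ".*?"':
            'label_map_path: "train/label_map.pbtxt"',
        r'num_classes: \d+':
            'num_classes: 3',  # assumed: red / yellow / green
    }
    for pattern, value in replacements.items():
        cfg = re.sub(pattern, value, cfg)
    # the train/test input_path entries pointing at the Roboflow
    # .tfrecord files get patched the same way

    with open('pipeline_file.config', 'w') as f:
        f.write(cfg)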

DynamicDetection.py tests the model with a webcam

Pretrained model used: SSD MobileNet V2 FPNLite 320x320 from the TF2 Detection Model Zoo

The best performance was obtained after training for 2000 steps, which corresponds to ckpt-2

CkptSave.sh is a script that copies checkpoints to a separate folder as they are generated during training
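
CkptSave.sh itself is shell, but the idea translates to a few lines of Python (folder names are assumptions). TF2 training keeps only the most recent checkpoints by default, hence the need to copy them out before they rotate away:

    # Hypothetical sketch of what CkptSave.sh does: mirror new checkpoint
    # files out of the training folder as they appear. Paths are assumed.
    import shutil
    import time
    from pathlib import Path

    src, dst = Path('training'), Path('saved_ckpts')
    dst.mkdir(exist_ok=True)

    while True:
        for f in src.glob('ckpt-*'):
            if not (dst / f.name).exists():
                shutil.copy2(f, dst / f.name)
                print(f'saved {f.name}')
        time.sleep(30)  # poll every 30 s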



Hardware Control

MotorControl.py controls the speed and direction of all three motors independently

It turns left and right using the left and right arrow keys

Up and down arrow keys are used to increase and decrease the duty cycle of the PWM signal to the front and back motors, which sets their speed

Pressing the 'f' key moves the car forward; pressing the 'r' key moves it in reverse

Keys 'a' and 's' control the servo where the camera is mounted

Key 'd' measures the distance in front of the car using the HC-SR04 Ultrasonic Sensor
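
Two hardware primitives sit behind these key bindings: PWM speed control through the L298N enable pin, and timing the HC-SR04 echo pulse. A minimal sketch of both, with placeholder BCM pin numbers:

    # Hypothetical sketch: PWM speed on an L298N enable pin, plus an
    # HC-SR04 distance measurement. Pin numbers are placeholders.
    import time
    import RPi.GPIO as GPIO

    ENA, TRIG, ECHO = 18, 23, 24  # assumed BCM pins
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(ENA, GPIO.OUT)
    GPIO.setup(TRIG, GPIO.OUT, initial=GPIO.LOW)
    GPIO.setup(ECHO, GPIO.IN)

    pwm = GPIO.PWM(ENA, 1000)  # 1 kHz PWM on the motor enable pin
    pwm.start(50)  # 50% duty cycle; the arrow keys would call
                   # pwm.ChangeDutyCycle() to adjust speed

    def distance_cm():
        """Fire the HC-SR04 trigger and time the echo pulse."""
        GPIO.output(TRIG, GPIO.HIGH)
        time.sleep(10e-6)  # 10 us trigger pulse
        GPIO.output(TRIG, GPIO.LOW)
        start = end = time.time()
        while GPIO.input(ECHO) == 0:
            start = time.time()
        while GPIO.input(ECHO) == 1:
            end = time.time()
        return (end - start) * 34300 / 2  # speed of sound, there and back

    print(f'{distance_cm():.1f} cm ahead')
    pwm.stop()
    GPIO.cleanup()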



Bash commands

  1. Start Training:

    CUDA_VISIBLE_DEVICES="0" python3 models/research/object_detection/model_main_tf2.py \
    --pipeline_config_path='pipeline_file.config' \
    --model_dir='training/' \
    --alsologtostderr \
    --num_train_steps=5000 \
    --sample_1_of_n_eval_examples=1 \
    --num_eval_steps=100
    
  2. Start Evaluating:

    NOTE: CUDA_VISIBLE_DEVICES="-1" hides the GPU so that evaluation doesn't take away GPU resources from the training

    CUDA_VISIBLE_DEVICES="-1" python3 models/research/object_detection/model_main_tf2.py \
    --pipeline_config_path='pipeline_file.config' \
    --model_dir='training/' \
    --checkpoint_dir=training/ \
    --eval_dir=eval/
    
  3. Track GPU usage:

    nvidia-smi -l 1
    
  4. Track Training and Evaluation with Tensorboard:

    tensorboard --logdir_spec=x:training/train/,y:training/eval/
    
  5. Save model for export:

    python3 models/research/object_detection/exporter_main_v2.py \
    --input_type=image_tensor \
    --pipeline_config_path='pipeline_file.config' \
    --trained_checkpoint_dir=training/ \
    --output_directory=export/
    
  6. Convert to TFLite (see the inference sketch after this list):

    python3 models/research/object_detection/export_tflite_graph_tf2.py \
    --pipeline_config_path='pipeline_file.config' \
    --trained_checkpoint_dir=export_ckpt/ \
    --output_directory=tflite/
    
    
    tflite_convert --saved_model_dir=tflite/saved_model \
    --output_file=tflite/saved_model/detect.tflite \
    --input_shapes=1,320,320,3 \
    --input_arrays=normalized_input_image_tensor \
    --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
    --inference_type=QUANTIZED_UINT8 \
    --allow_custom_ops
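
Once detect.tflite is copied to the Pi, inference follows the standard TFLite interpreter pattern. A sketch of the detection loop; the output tensor order and the 0.6 score threshold are assumptions, so check get_output_details() on the converted model:

    # Hypothetical sketch: run detect.tflite against camera frames on the Pi.
    # Outputs follow the usual TFLite_Detection_PostProcess layout
    # (boxes, classes, scores, count), but verify the order for your model.
    import cv2
    import numpy as np
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path='tflite/saved_model/detect.tflite')
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()
    _, height, width, _ = inp['shape']

    cap = cv2.VideoCapture(0)  # camera index assumed
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(cv2.resize(frame, (width, height)),
                           cv2.COLOR_BGR2RGB)
        interpreter.set_tensor(inp['index'],
                               np.expand_dims(rgb, 0).astype(inp['dtype']))
        interpreter.invoke()
        classes = interpreter.get_tensor(out[1]['index'])[0]
        scores = interpreter.get_tensor(out[2]['index'])[0]
        keep = scores > 0.6  # assumed confidence threshold
        if keep.any():
            # e.g. signal MotorControl to move once a green light is seen
            print('detections:', classes[keep])
        cv2.imshow('detect', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()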
    



Demo

Video of the car detecting a green light and moving forward

IMG_1432.mov

Pictures of the car

IMG_1486, IMG_1485, IMG_1484
