- Raspberry Pi 4
- RC car frame with 3 DC motors (front-wheel drive, rear-wheel drive, and steering)
- 2 L298N motor driver modules
- HC-SR04 Ultrasonic Sensor
- Pi Camera
- 5V/2.1A Power Bank
- RPi.GPIO
- OpenCV
- TensorFlow / TensorFlow Lite
- Roboflow
Traffic light detection: computer vision decides when the car starts moving
Circuit Diagram:
Data collection was done using a webcam
Total images collected: 580
Image preprocessing and augmentation were done in Roboflow
Total images for training: 1500
Model.py downloads all dependencies and creates a pipeline.config file that uses the .tfrecord files generated by Roboflow
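For reference, the parts of pipeline.config that consume the Roboflow output are the input-reader blocks. This is a hedged sketch, not the exact file Model.py writes; the paths and record names below are placeholders:

```
train_input_reader {
  label_map_path: "train/label_map.pbtxt"        # placeholder path
  tf_record_input_reader {
    input_path: "train/records.tfrecord"         # Roboflow-generated .tfrecord
  }
}
eval_input_reader {
  label_map_path: "valid/label_map.pbtxt"        # placeholder path
  tf_record_input_reader {
    input_path: "valid/records.tfrecord"         # Roboflow-generated .tfrecord
  }
}
```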
DynamicDetection.py tests the model with a webcam
Pretrained model used: SSD MobileNet V2 FPNLite 320x320 from TF Model Zoo
The best performance was obtained after training for 2000 steps, corresponding to checkpoint ckpt-2
CkptSave.sh is a script that copies new checkpoints from the training folder to a backup folder as they are generated
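A minimal one-shot version of such a script could look like the following. This is a sketch under assumed conventions (the `save_ckpts` function name and default folder names are illustrative; the real CkptSave.sh may loop or use inotify instead):

```shell
#!/bin/sh
# CkptSave.sh sketch: back up TF2 checkpoint files as they appear.
# Copies any ckpt-* file (ckpt-N.index, ckpt-N.data-*) from the training
# directory to a backup directory unless it has already been copied.
save_ckpts() {
  src="$1"
  dst="$2"
  mkdir -p "$dst"
  for f in "$src"/ckpt-*; do
    [ -e "$f" ] || continue            # no checkpoints yet
    base=$(basename "$f")
    if [ ! -e "$dst/$base" ]; then
      cp "$f" "$dst/$base"
      echo "saved $base"
    fi
  done
}
```

Invoking `save_ckpts training/ backup/` periodically (for example via `watch -n 60` or a loop with `sleep`) mirrors checkpoints as training progresses.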
MotorControl.py controls the speed and direction of all three motors independently
The left and right arrow keys turn the car left and right
The up and down arrow keys increase and decrease the duty cycle of the PWM signal driving the front and rear motors
Pressing the 'f' key moves the car forward; pressing the 'r' key moves it in reverse
The 'a' and 's' keys control the servo on which the camera is mounted
The 'd' key measures the distance in front of the car using the HC-SR04 ultrasonic sensor
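The duty-cycle stepping and the HC-SR04 distance reading come down to two small pieces of pure logic. A hedged sketch follows; the function names, the 5% step size, and the constants are illustrative, not taken from MotorControl.py:

```python
# Illustrative helpers for a MotorControl-style script. On the Pi these would
# be driven by RPi.GPIO PWM objects and the HC-SR04 trigger/echo pins; the
# functions themselves are pure so they can be tested anywhere.

SPEED_OF_SOUND_CM_S = 34300  # approximate speed of sound in air, cm/s


def step_duty_cycle(duty, direction, step=5):
    """Increase/decrease a PWM duty cycle, clamped to the valid 0-100 range.

    direction: +1 for the up arrow key, -1 for the down arrow key.
    """
    return max(0, min(100, duty + direction * step))


def echo_pulse_to_distance_cm(pulse_duration_s):
    """Convert an HC-SR04 echo pulse width (seconds) to distance in cm.

    The echo pulse covers the round trip to the obstacle and back, so the
    one-way distance is (duration * speed of sound) / 2.
    """
    return (pulse_duration_s * SPEED_OF_SOUND_CM_S) / 2
```

On the Pi, the new value would be applied with `pwm.ChangeDutyCycle(...)`, and the pulse duration is obtained by timing the rising and falling edges on the sensor's echo pin.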
- Start Training:
CUDA_VISIBLE_DEVICES="0" python3 models/research/object_detection/model_main_tf2.py \
  --pipeline_config_path='pipeline_file.config' \
  --model_dir='training/' \
  --alsologtostderr \
  --num_train_steps=5000 \
  --sample_1_of_n_eval_examples=1 \
  --num_eval_steps=100
- Start Evaluating:
NOTE: CUDA_VISIBLE_DEVICES="-1" hides the GPU so that evaluation doesn't take GPU resources away from training
CUDA_VISIBLE_DEVICES="-1" python3 models/research/object_detection/model_main_tf2.py \
  --pipeline_config_path='pipeline_file.config' \
  --model_dir='training/' \
  --checkpoint_dir=training/ \
  --eval_dir=eval/
- Track GPU usage:
nvidia-smi -l 1
- Track Training and Evaluation with Tensorboard:
tensorboard --logdir_spec=x:training/train/,y:training/eval/
- Save model for export:
python3 models/research/object_detection/exporter_main_v2.py \
  --input_type=image_tensor \
  --pipeline_config_path='pipeline_file.config' \
  --trained_checkpoint_dir=training/ \
  --output_directory=export/
- Convert to TFLite:
python3 models/research/object_detection/export_tflite_graph_tf2.py \
  --pipeline_config_path='pipeline_file.config' \
  --trained_checkpoint_dir=export_ckpt/ \
  --output_directory=tflite/

tflite_convert --saved_model_dir=tflite/saved_model \
  --output_file=tflite/saved_model/detect.tflite \
  --input_shapes=1,300,300,3 \
  --input_arrays=normalized_input_image_tensor \
  --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
  --inference_type=QUANTIZED_UINT8 \
  --allow_custom_ops