This project uses a DJI Tello drone and computer vision to perform real-time pose recognition and tracking, enabling the drone to be maneuvered with specific hand gestures. It is designed to let users interact with the drone through intuitive movements, improving the user experience and broadening the drone's practical applications.
- Pose Recognition: Utilizes computer vision algorithms to recognize and interpret human poses in real-time.
- Gesture-Based Maneuvering: Translates specific hand gestures into drone commands for seamless flight control.
- Testing Environment: Includes test folders to evaluate the algorithm on camera without needing the drone, ensuring development and testing flexibility.
- Ready-to-Go Code: Scripts ready for immediate use and deployment with the DJI Tello drone.
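To illustrate the gesture-based maneuvering idea, here is a minimal sketch of how detected poses could map to drone commands. It assumes MediaPipe-style normalized landmarks (y grows downward, 0.0 is the top of the frame); the function and command names are illustrative, not the repository's actual API.

```python
# Minimal sketch: map wrist positions relative to the shoulder line to a
# drone command. Assumes MediaPipe-style normalized coordinates (y grows
# downward). Names here are hypothetical, not the repo's actual functions.

def classify_gesture(left_wrist_y: float, right_wrist_y: float,
                     shoulder_y: float) -> str:
    """Return a command string based on which wrists are above the shoulders."""
    left_up = left_wrist_y < shoulder_y    # smaller y = higher in the frame
    right_up = right_wrist_y < shoulder_y
    if left_up and right_up:
        return "takeoff"   # both arms raised
    if left_up:
        return "left"      # only left arm raised
    if right_up:
        return "right"     # only right arm raised
    return "hover"         # both arms down: hold position

# Example: right wrist (y=0.3) is above the shoulders (y=0.5),
# left wrist (y=0.8) is below them.
print(classify_gesture(0.8, 0.3, 0.5))  # → right
```

In practice, the landmark coordinates would come from the pose-recognition stage running on each camera frame, and the returned command would be forwarded to the Tello's control API.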
Clone the repository to get started with the drone pose recognition and maneuvering:

```shell
git clone https://github.com/AfonsoSCCarvalho/AI_Drone.git
cd AI_Drone
```

Then install the dependencies:

```shell
pip install -r requirements.txt
```
To start the pose recognition and maneuvering, run one of the scripts prefixed with 1.X. For instance, script 1.2 additionally records the program's video feed in real time, though it runs more slowly.
Ensure your environment is properly set up with a camera and that your DJI Tello drone is configured according to the manufacturer's specifications.
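As a quick way to confirm the drone is reachable before launching the main scripts, the sketch below runs a pre-flight check using djitellopy, a common Python client for the Tello SDK. This is an assumption about the setup, not code from this repository; `preflight` and the battery threshold are hypothetical.

```python
# Hedged pre-flight sketch. Assumes the djitellopy library
# (`pip install djitellopy`) and that your machine is connected
# to the Tello's Wi-Fi network. Not part of the repo's own scripts.

def battery_ok(level: int, minimum: int = 20) -> bool:
    """Refuse to fly below a minimum battery percentage (illustrative threshold)."""
    return level >= minimum

def preflight(minimum: int = 20) -> None:
    """Connect to the drone and verify it has enough charge to fly."""
    from djitellopy import Tello  # imported here so the check stays optional

    tello = Tello()
    tello.connect()              # raises if the drone is unreachable
    level = tello.get_battery()  # battery percentage, 0-100
    print(f"Battery: {level}%")
    if not battery_ok(level, minimum):
        raise SystemExit("Battery too low for a safe flight.")
```

Calling `preflight()` before starting a gesture-control session catches the two most common setup problems (wrong Wi-Fi network, drained battery) early.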
Feel free to fork the repository, make improvements, and submit pull requests. I would be excited to see community contributions that enhance the functionality and scope of this project.
This project is licensed under the MIT License - see the LICENSE file for details.