This project, under the Perception and Robotics Group (PRG) at UMD, explores using OpenAI's ChatGPT for applications in robotics. We are building a high-level function library that ChatGPT can control, which can then be used to carry out complex tasks that would otherwise require human intervention. This is an ongoing project.
- Note: add Git LFS tracking in the `.gitattributes` file, or via the terminal, if Python files are bigger than 50 MB
- Note: use this Google Drive link to download the `blender_data` folder and avoid LFS
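The LFS note above can be set up either with the `git lfs track` command or by editing `.gitattributes` directly. The patterns below are illustrative assumptions (large binary assets such as model weights and `.blend` files), not the repo's actual tracked list:

```
# .gitattributes fragment (example patterns, adjust to the files that exceed 50 MB)
# Equivalent terminal command: git lfs track "*.weights"
*.weights filter=lfs diff=lfs merge=lfs -text
*.blend   filter=lfs diff=lfs merge=lfs -text
```

Remember to commit `.gitattributes` itself so collaborators share the same tracking rules.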
- 2D bounding box of objects from Blender 2.93
- Integrate an IMU with Blender
- Integrate LiDAR/SONAR with Blender
- Train YOLO on the data generated from Blender
- Rover position data with detections on the point cloud (PCL)
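The 2D bounding-box step can be sketched outside Blender as a plain pinhole projection of an object's 3D corners; inside Blender 2.93 the projection itself would instead come from `bpy_extras.object_utils.world_to_camera_view`. The intrinsics below (`fx`, `fy`, `cx`, `cy`) and the cube are made-up example values, not values from this project:

```python
# Sketch: project 3D bounding-box corners into the image plane with a pinhole
# camera model, then take the min/max to get the 2D axis-aligned box.

def project(point, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Pinhole projection of a camera-space point (x, y, z), z > 0."""
    x, y, z = point
    return (fx * x / z + cx, fy * y / z + cy)

def bbox_2d(corners_3d):
    """2D box (xmin, ymin, xmax, ymax) enclosing the projected corners."""
    pts = [project(c) for c in corners_3d]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (min(xs), min(ys), max(xs), max(ys))

# Example: a unit cube centred 4 m in front of the camera
corners = [(x, y, 4.0 + z)
           for x in (-0.5, 0.5) for y in (-0.5, 0.5) for z in (-0.5, 0.5)]
print(bbox_2d(corners))
```

Note that the nearer face of the cube (smaller z) dominates the box extents, which is why all eight corners must be projected rather than just the centre.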
- Colab notebook used to train the YOLOv4-tiny; find it here
- Modified from the Colab notebook provided here
- We trained a YOLOv4-tiny on a dataset of around 5000 images
- Download the model's best-weights file from here
- Copy the model weights here
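Training YOLO on Blender-generated data requires writing each bounding box in the Darknet label format: one line per object, `class x_center y_center width height`, all normalised to [0, 1]. A minimal sketch of that conversion (the function name `to_yolo_label` is a hypothetical helper, not part of this repo):

```python
def to_yolo_label(cls_id, box, img_w, img_h):
    """Convert a pixel-space (xmin, ymin, xmax, ymax) box to a Darknet/YOLO
    label line: 'class x_center y_center width height', normalised to [0, 1]."""
    xmin, ymin, xmax, ymax = box
    xc = (xmin + xmax) / 2.0 / img_w
    yc = (ymin + ymax) / 2.0 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return f"{cls_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# Example: a 200x200 px box in a 640x480 image
print(to_yolo_label(0, (100, 50, 300, 250), 640, 480))
# → 0 0.312500 0.312500 0.312500 0.416667
```

One such `.txt` file per image, with the same base name, is what the Darknet training pipeline expects alongside the images.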
The API library functions are written in the `chat_script/func.py` file:
- `get_bot_position()` - Returns the robot's position as a tuple of x, y, z coordinates, called points.
- `get_position(obj_name)` - Returns the position, as points, of the object whose name is passed to the function.
- `set_bot_motion(points)` - Moves the robot to the given points at a certain time in the future.
- `set_yaw(angle)` - Sets the yaw angle for the bot.
- `set_pitch(angle)` - Sets the pitch angle for the bot.
- `set_roll(angle)` - Sets the roll angle for the bot.