There are 24 repositories under the robot-vision topic.
Real-time fire detection in video imagery using a convolutional neural network (deep learning), from our ICIP 2018 paper (Dunnings / Breckon) and ICMLA 2019 paper (Samarth / Bhowmik / Breckon)
Simultaneous Enhancement and Super-Resolution. #RSS2020
Code for the paper "Unseen Object Amodal Instance Segmentation via Hierarchical Occlusion Modeling", ICRA 2022
[IEEE RAL'24 & IROS'24] Mobile-Seed: Joint Semantic Segmentation and Boundary Detection for Mobile Robots
List of commonly used robotics libraries and packages
This repository includes an implementation of intrinsic and extrinsic camera calibration for distance calculation
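Once a camera's intrinsics are calibrated, a common way to estimate distance is the pinhole model: an object of known real-world size subtends fewer pixels the farther away it is. A minimal sketch of that idea (illustrative only; the function name and values below are assumptions, not the repository's actual API):

```python
# Pinhole-model distance estimation (hypothetical helper, not from the repo):
# distance = focal_length_px * real_height_m / observed_pixel_height
def distance_from_height(focal_px: float, real_height_m: float, pixel_height: float) -> float:
    """Estimate distance to an object of known height using a calibrated focal length."""
    return focal_px * real_height_m / pixel_height

# Example: 800 px focal length, a 1.7 m object imaged at 170 px tall
print(distance_from_height(800.0, 1.7, 170.0))  # 8.0 (metres)
```

The focal length in pixels comes straight out of the calibrated intrinsic matrix; extrinsics are needed only when the distance must be expressed in a frame other than the camera's.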
Saliency-guided Visual Attention Modeling. #RSS2022 #SOD #RobotVision
Attempting to create a program capable of combining stereo video input with motors and other sensors on a PC running Linux; the target is embedded Linux for use in a robot!
Halcon usage and programming, all in one.
Official project website for the AAAI 2022 paper "Stereo Neural Vernier Caliper"
ICRA 2020 papers focusing on point cloud analysis
Some simple examples for the Intel RealSense depth camera; I hope they are helpful!
FLS point cloud registration library.
An autonomous RGB-D camera for robot vision training. It creates and modifies environments in Unreal Engine. Both RGB-D and annotation data are captured and published via TCP.
Python and Gazebo-ROS implementation of an image quality metric to evaluate image quality for robust robot vision.
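A common proxy for image quality in robot vision is sharpness measured as the variance of a discrete Laplacian: flat or blurred images score near zero, while crisp edges score high. The sketch below illustrates that general technique in plain NumPy; it is an assumption for illustration, not the metric this repository actually implements:

```python
import numpy as np

# Hedged sketch: variance-of-Laplacian sharpness score (illustrative only).
def laplacian_variance(img: np.ndarray) -> float:
    # 4-neighbour discrete Laplacian over the interior of a grayscale image
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

sharp = np.zeros((8, 8)); sharp[:, 4:] = 1.0  # hard vertical edge -> high score
flat = np.full((8, 8), 0.5)                   # uniform image -> zero score
print(laplacian_variance(sharp) > laplacian_variance(flat))  # True
```

A controller could threshold such a score to discard degraded frames before they reach downstream perception nodes.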
ROS wrapper for OpenPose 1.5.0
Fully copied comp disk image
LabVIEW code for FRC Team 60, 2019 season
An Unreal Engine to ROS bridge for AutonomousRGBDCamera. Both RGB-D and annotation data are extracted and published as ROS topics.
Library and test application for tracking Color Blobs. Uses OpenCV and Qt.
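The core of colour-blob tracking is simple: threshold pixels into a binary mask, then follow the blob by its mask centroid from frame to frame. A minimal NumPy sketch of that centroid step (the function name and data here are illustrative assumptions; the actual library builds on OpenCV and Qt):

```python
import numpy as np

# Illustrative sketch (not the repo's API): centroid of a thresholded blob mask.
def blob_centroid(mask: np.ndarray):
    """Return (x, y) centroid of True pixels, or None if the mask is empty."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 6:9] = True            # a 3x3 "colour blob"
print(blob_centroid(mask))       # (7.0, 3.0)
```

In a real pipeline the mask would come from an HSV range threshold (e.g. OpenCV's `cv2.inRange`), and the centroid would feed a tracker or servo loop.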
Embedded Robotic System College Project
Robot Vision Mini-Project for a course at Aalborg University.
Robot vision made easier: Pixy2 is smaller, faster and more capable than the original Pixy. Like its predecessor, Pixy2 can learn to detect objects that you teach it, just by pressing a button. Additionally, Pixy2 has new algorithms that detect and track lines for use with line-following robots. The new algorithms can detect intersections and "road signs" as well. The road signs can tell your robot what to do, such as turn left, turn right, slow down, etc. And Pixy2 does all of this at 60 frames per second, so your robot can be fast, too.
A collection and implementation of computer vision techniques for robots