uenian33 / nx100_robotic_tasks

Potential task implementations for nx100-remote-control

Motoman NX100 learning task pipeline

Potential task implementations for the nx100-remote-control repository's NX100 control backend. The tasks designed so far (implemented or planned):

  • Disparity estimation from a stereo camera (a rough sketch follows this list).
  • Object grasping pose estimation.
  • Depth-only aware grasping for generalized objects.
  • Semantic visual detections.
  • Map reconstruction.
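
As a rough illustration of the disparity task, the sketch below computes a disparity map from a rectified stereo pair with OpenCV's semi-global block matcher. This is only a minimal sketch, not the pipeline used in this repository; the file names and matcher parameters are placeholders.

import cv2
import numpy as np

# Load a rectified stereo pair (placeholder file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; all parameters are illustrative only.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,  # must be divisible by 16
    blockSize=5,
    P1=8 * 5 ** 2,      # smoothness penalty for small disparity changes
    P2=32 * 5 ** 2,     # smoothness penalty for large disparity changes
)

# compute() returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0
print(disparity.min(), disparity.max())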

Updates

  • 2021.6.26: refactored some class implementations
  • 2021.6.6: fixed disparity estimation; added two disparity pipelines for more robust estimation

To-do list

  • Readme documentation
  • Combine previous camera calibration with the new disparity pipeline (a rough sketch follows this list)
  • Object segmentation feature extraction
  • Semantic features + depth features for imitation learning
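
For combining the camera calibration with the disparity pipeline, one common route is reprojecting disparities to 3D points with the Q matrix that cv2.stereoRectify produces. The sketch below assumes a float32 disparity map as input and uses placeholder calibration values throughout:

import cv2
import numpy as np

# Q is the 4x4 disparity-to-depth matrix returned by cv2.stereoRectify
# for the calibrated rig; every value below is a placeholder.
Q = np.float32([
    [1, 0, 0, -320.0],          # -cx (principal point x)
    [0, 1, 0, -240.0],          # -cy (principal point y)
    [0, 0, 0,  700.0],          # focal length in pixels
    [0, 0, -1.0 / 0.06, 0.0],   # -1 / baseline in metres
])

disparity = np.load("disparity.npy")  # placeholder: a float32 disparity map
points = cv2.reprojectImageTo3D(disparity, Q)
mask = disparity > disparity.min()    # drop invalid disparities
cloud = points[mask]                  # (N, 3) points in the camera frame
print(cloud.shape)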

Usage examples

See the disparity_estimation and stereo folders, respectively.

Getting started

Basic steps to try out the setup and its parts.

Activate env

conda activate pytorch
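
Assuming the pytorch environment provides PyTorch, a quick sanity check from Python:

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True if a CUDA GPU is usable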

Testing 3D camera feed

To check whether the 3D camera is available, list the video devices. This should show /dev/video0 and possibly /dev/video1 or others:

ls /dev/video*

Try the feed:

vlc v4l2:///dev/video0
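
Alternatively, a minimal OpenCV check that a frame can actually be grabbed, assuming the camera enumerates as /dev/video0 (device index 0):

import cv2

cap = cv2.VideoCapture(0)  # index 0 maps to /dev/video0
ok, frame = cap.read()
print("frame grabbed:", ok, "shape:", frame.shape if ok else None)
cap.release()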

Visualize grasping

Meant for testing different objects on the table and how the camera sees them.

...coming

Test grasping

python ./run_grasp_generator.py
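
The output format of run_grasp_generator.py is not documented here, so purely as an illustration, the sketch below draws a hypothetical grasp candidate (centre, angle, gripper width) on a camera image; all names and values are assumptions.

import cv2
import numpy as np

# Hypothetical grasp candidate: centre (px), angle (rad), gripper width (px).
# The actual output of run_grasp_generator.py may look entirely different.
cx, cy, angle, width = 320, 240, np.pi / 4, 80

frame = cv2.imread("scene.png")  # placeholder camera image
dx = 0.5 * width * np.cos(angle)
dy = 0.5 * width * np.sin(angle)
p1 = (int(cx - dx), int(cy - dy))
p2 = (int(cx + dx), int(cy + dy))
cv2.line(frame, p1, p2, (0, 255, 0), 2)           # gripper closing line
cv2.circle(frame, (cx, cy), 4, (0, 0, 255), -1)   # grasp centre
cv2.imwrite("grasp_overlay.png", frame)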
