Repositories under the manipulation topic:
A Fast & Light Virtual DOM Alternative
Central repository for tools, tutorials, resources, and documentation for robotics simulation in Unity.
Study guides for MIT's 15.003 Data Science Tools
A comprehensive list of Implicit Representations and NeRF papers relating to the Robotics/RL domain, including papers, code, and related websites
A cross-platform and ultrafast toolkit for FASTA/Q file manipulation
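For context on what FASTA manipulation involves, here is a minimal, generic sketch of a FASTA record parser in Python — this is an illustration of the file format only, not the toolkit's own implementation or API:

```python
def parse_fasta(text):
    """Parse FASTA-formatted text into (header, sequence) pairs.

    A record starts with a '>' header line; the sequence may be
    wrapped across multiple lines, which are concatenated here.
    """
    records = []
    header, seq = None, []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith(">"):
            if header is not None:
                records.append((header, "".join(seq)))
            header, seq = line[1:], []
        elif line:
            seq.append(line)
    if header is not None:
        records.append((header, "".join(seq)))
    return records

print(parse_fasta(">seq1\nACGT\nTTGA\n>seq2\nGGCC"))
# → [('seq1', 'ACGTTTGA'), ('seq2', 'GGCC')]
```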
Train robotic agents to learn to plan pushing and grasping actions for manipulation with deep reinforcement learning.
[Embodied-AI-Survey-2024] Paper list and projects for Embodied AI
BruteSploit is a collection of methods for automated wordlist generation, brute-forcing, and manipulation with an interactive shell. It can be used during a penetration test to enumerate targets, and in CTFs to manipulate, combine, transform, and permute words or text files :p
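As a generic illustration of the combine/permute step such wordlist tools perform (this is not BruteSploit's actual code or API), candidate strings can be built by concatenating ordered selections of base words:

```python
from itertools import permutations

def combine_words(words, length=2):
    """Build wordlist candidates by concatenating every ordered
    selection of `length` distinct base words."""
    return ["".join(p) for p in permutations(words, length)]

candidates = combine_words(["admin", "2024", "!"])
print(candidates)
# → ['admin2024', 'admin!', '2024admin', '2024!', '!admin', '!2024']
```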
[IROS 2021] BundleTrack: 6D Pose Tracking for Novel Objects without Instance or Category-Level 3D Models
Train robotic agents to learn pick and place with deep learning for vision-based manipulation in PyBullet. Transporter Nets, CoRL 2020.
Code for "Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation"
A curated list of 3D Vision papers relating to the Robotics domain in the era of large models (LLMs/VLMs), inspired by awesome-computer-vision, including papers, code, and related websites
No Maintenance Intended
😎 A curated list of awesome mobile robots study resources based on ROS (including SLAM, odometry and navigation, manipulation)
PyBullet Planning
Stream-based library for parsing and manipulating subtitle files
[IROS 2020] se(3)-TrackNet: Data-driven 6D Pose Tracking by Calibrating Image Residuals in Synthetic Domains
PDDLStream: Integrating Symbolic Planners and Blackbox Samplers
Isomorphic hyperHTML
[ICRA 2022] CaTGrasp: Learning Category-Level Task-Relevant Grasping in Clutter from Simulation
MIT-Princeton Vision Toolbox for Robotic Pick-and-Place at the Amazon Robotics Challenge 2017 - Robotic Grasping and One-shot Recognition of Novel Objects with Deep Learning.
A complete end-to-end demonstration in which we collect training data in Unity and use that data to train a deep neural network to predict the pose of a cube. This model is then deployed in a simulated robotic pick-and-place task.
Pytorch code for ICRA'22 paper: "Single-Shot Multi-Object 3D Shape Reconstruction and Categorical 6D Pose and Size Estimation"
Paper list of robotic grasping and some related works
Benchmarking Knowledge Transfer in Lifelong Robot Learning
Magic potions to clean and transform your data 🧙