petercorke / RVC3-python

Code examples for Robotics, Vision & Control 3rd edition in Python

Robotics, Vision & Control: 3rd edition in Python (2023)

Front cover of Robotics, Vision & Control, 3rd edition

This book depends on the following open-source Python packages:

  • Robotics Toolbox for Python
  • Machine Vision Toolbox for Python
  • Block diagram simulation for Python

which in turn have dependencies on other packages created by the author and third parties.

Installing the package

This package provides a simple one-step installation of the required Toolboxes:

pip install rvc3python

or

conda install rvc3python

There are a lot of dependencies, so this might take a minute or so. You now have a very powerful computing environment for robotics and computer vision.
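
Once installation completes, a quick sanity check along these lines (a minimal sketch, using only classes demonstrated later in this README) confirms that the core Toolboxes import and work:

# quick post-install check: build a transform and solve forward kinematics
from spatialmath import SE3
from roboticstoolbox import models

T = SE3.Trans(1, 2, 3) * SE3.Rx(0.3)    # an SE(3) rigid-body transform
print(T)

panda = models.ETS.Panda()              # ETS model of the Panda robot
print(panda.fkine(panda.qz))            # forward kinematics at the zero configuration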

Python version

The Toolboxes make use of recent language additions, particularly around type hinting, so use at least Python 3.8. Python 3.7 reaches end of life in June 2023.
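
A quick, generic way to confirm which interpreter you are running (nothing package-specific, just standard library):

# confirm the running Python version is at least 3.8
import sys

print(sys.version)
assert sys.version_info >= (3, 8), "Python 3.8 or newer is required"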

Not all package dependencies will work with the latest release of Python. In particular, check:

  • PyTorch, used for the segmentation examples in Chapter 12.
  • Open3D, used for the point cloud examples in Chapter 14.

Installing into a Conda environment

It's probably a good idea to create a virtual environment to keep this package and its dependencies separated from your other Python code and projects. If you've never used virtual environments before, this might be a good time to start; it is really easy using Conda:

conda create -n RVC3 python=3.10
conda activate RVC3
pip install rvc3python

Installing deep learning tools

Chapter 11 has some deep learning examples based on PyTorch. If you don't have PyTorch installed, you can use the pytorch install option:

pip install rvc3python[pytorch]

or

conda install rvc3python[pytorch]
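
After installation, a quick check (a minimal sketch, independent of this package) confirms that PyTorch is importable and reports whether a GPU is available:

# verify the PyTorch installation
import torch

print(torch.__version__)           # installed PyTorch version
print(torch.cuda.is_available())   # True if a CUDA-capable GPU can be used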

Using the Toolboxes

The simplest way to get going is to use the command line tool rvctool:

$ rvctool
 ____       _           _   _             __     ___     _                ___      ____            _             _   _____ 
|  _ \ ___ | |__   ___ | |_(_) ___ ___    \ \   / (_)___(_) ___  _ __    ( _ )    / ___|___  _ __ | |_ _ __ ___ | | |___ / 
| |_) / _ \| '_ \ / _ \| __| |/ __/ __|    \ \ / /| / __| |/ _ \| '_ \   / _ \/\ | |   / _ \| '_ \| __| '__/ _ \| |   |_ \ 
|  _ < (_) | |_) | (_) | |_| | (__\__ \_    \ V / | \__ \ | (_) | | | | | (_>  < | |__| (_) | | | | |_| | | (_) | |  ___) |
|_| \_\___/|_.__/ \___/ \__|_|\___|___( )    \_/  |_|___/_|\___/|_| |_|  \___/\/  \____\___/|_| |_|\__|_|  \___/|_| |____/ 
                                      |/                                                                                   
for Python (RTB==1.0.2, MVTB==0.9.1, SMTB==1.0.0)

import numpy as np
from scipy import linalg, optimize
import matplotlib.pyplot as plt
from math import pi
from spatialmath import *
from spatialmath.base import *
from roboticstoolbox import *
from machinevisiontoolbox import *
import machinevisiontoolbox.base as mvbase

func/object?       - show brief help
help(func/object)  - show detailed help
func/object??      - show source code


Results of assignments will be displayed, use trailing ; to suppress

 
Python 3.8.5 (default, Sep  4 2020, 02:22:02) 
Type 'copyright', 'credits' or 'license' for more information
IPython 8.0.1 -- An enhanced Interactive Python. Type '?' for help.


>>> 

This provides an interactive Python (IPython) session with all the Toolboxes and supporting packages imported, and ready to go. It's a highly capable, convenient, and "MATLAB-like" workbench environment for robotics and computer vision.

For example, loading an ETS model of a Panda robot, solving a forward and an inverse kinematics problem, and launching an interactive graphical display is simply:

>>> panda = models.ETS.Panda()
ERobot: Panda (by Franka Emika), 7 joints (RRRRRRR)
┌─────┬───────┬───────┬────────┬─────────────────────────────────────────────┐
│link │ link  │ joint │ parent │ ETS: parent to link                         │
├─────┼───────┼───────┼────────┼─────────────────────────────────────────────┤
│   0 │ link0 │     0 │ BASE   │ tz(0.333) ⊕ Rz(q0)                          │
│   1 │ link1 │     1 │ link0  │ Rx(-90°) ⊕ Rz(q1)                           │
│   2 │ link2 │     2 │ link1  │ Rx(90°) ⊕ tz(0.316) ⊕ Rz(q2)                │
│   3 │ link3 │     3 │ link2  │ tx(0.0825) ⊕ Rx(90°) ⊕ Rz(q3)               │
│   4 │ link4 │     4 │ link3  │ tx(-0.0825) ⊕ Rx(-90°) ⊕ tz(0.384) ⊕ Rz(q4) │
│   5 │ link5 │     5 │ link4  │ Rx(90°) ⊕ Rz(q5)                            │
│   6 │ link6 │     6 │ link5  │ tx(0.088) ⊕ Rx(90°) ⊕ tz(0.107) ⊕ Rz(q6)    │
│   7 │ @ee   │       │ link6  │ tz(0.103) ⊕ Rz(-45°)                        │
└─────┴───────┴───────┴────────┴─────────────────────────────────────────────┘

┌─────┬─────┬────────┬─────┬───────┬─────┬───────┬──────┐
│name │ q0  │ q1     │ q2  │ q3    │ q4  │ q5    │ q6   │
├─────┼─────┼────────┼─────┼───────┼─────┼───────┼──────┤
│  qr │  0° │ -17.2° │  0° │ -126° │  0° │  115° │  45° │
│  qz │  0° │  0°    │  0° │  0°   │  0° │  0°   │  0°  │
└─────┴─────┴────────┴─────┴───────┴─────┴───────┴──────┘

>>> panda.fkine(panda.qz)
   0.7071    0.7071    0         0.088     
   0.7071   -0.7071    0         0         
   0         0        -1         0.823     
   0         0         0         1      
>>> panda.ikine_LM(SE3.Trans(0.4, 0.5, 0.2) * SE3.Ry(pi/2))
IKSolution(q=array([  -1.849,   -2.576,   -2.914,     1.22,   -1.587,    2.056,   -1.013]), success=True, iterations=13, searches=1, residual=3.3549072615799585e-10, reason='Success')
>>> panda.teach(panda.qz)

Computer vision is just as easy. For example, we can import an image, blur it, and display it alongside the original:

>>> mona = Image.Read("monalisa.png")
>>> Image.Hstack([mona, mona.smooth(sigma=5)]).disp()

or load two images of the same scene, compute SIFT features, and display the putative matches:

>>> sf1 = Image.Read("eiffel-1.png", mono=True).SIFT()
>>> sf2 = Image.Read("eiffel-2.png", mono=True).SIFT()
>>> matches = sf1.match(sf2)
>>> matches.subset(100).plot("w")

rvctool is a wrapper around IPython where:

  • robotics and vision functions and classes can be accessed without needing package prefixes
  • results are displayed by default, as in MATLAB, and as in MATLAB a trailing semicolon suppresses this
  • the prompt is the standard Python REPL prompt >>> rather than the IPython prompt; this can be overridden by a command-line switch
  • lines from the book can be cut and pasted in, and prompt characters are ignored

The Robotics, Vision & Control book uses rvctool for all the included examples.

rvctool imports all of the above-mentioned packages using import *, which is not considered best Python practice. It is very convenient for interactive experimentation, but in your own code you can handle the imports as you see fit.
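
For instance, a standalone script might import only what it needs explicitly (a sketch only, reusing the Panda example from above):

# explicit imports for a standalone script, rather than import *
from math import pi
from spatialmath import SE3
from roboticstoolbox import models

panda = models.ETS.Panda()
sol = panda.ikine_LM(SE3.Trans(0.4, 0.5, 0.2) * SE3.Ry(pi / 2))
print(sol.q)    # joint configuration that achieves the requested pose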

Other command line tools

This package provides additional command line tools including:

  • eigdemo, an animation showing the linear transformation of a rotating unit vector, which demonstrates eigenvalues and eigenvectors.
  • tripleangledemo, experiment with various triple-angle sequences.
  • twistdemo, experiment with 3D twists.

Block diagram models

Block diagram models are key to the pedagogy of the RVC3 book, and 25 models are included. To simulate these models we use the Python package bdsim, which can run models:

  • written in Python using bdsim blocks and wiring (see the sketch after this list).
  • created graphically using bdedit and saved as a .bd (JSON format) file.
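
The sketch below illustrates the first, Python-based approach. It assumes the block constructors and wiring calls documented by the bdsim package; the particular blocks and parameters are illustrative and not one of the 25 included models:

import bdsim

sim = bdsim.BDSim()                 # the simulation engine
bd = sim.blockdiagram()             # an empty block diagram

# a simple unity-feedback loop: step demand -> error -> gain -> plant
demand = bd.STEP(T=1)               # unit step at t = 1 s
err = bd.SUM("+-")                  # demand minus feedback
gain = bd.GAIN(10)
plant = bd.LTI_SISO(0.5, [2, 1])    # first-order plant 0.5 / (2s + 1)
scope = bd.SCOPE()                  # plot the plant output

bd.connect(demand, err[0])
bd.connect(plant, err[1], scope)
bd.connect(err, gain)
bd.connect(gain, plant)

bd.compile()                        # check connectivity before running
out = sim.run(bd, 5)                # simulate for 5 seconds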

The models are included in the RVC3 package when it is installed, and rvctool adds them to the module search path. This means you can invoke them from within rvctool by:

>>> %run -m vloop_test

If you want to directly access the folder containing the models, the command line tool

bdsim_path

will display the full path to where they have been installed in the Python package tree.

Additional book resources

This GitHub repo provides additional resources for readers including:

  • Jupyter notebooks containing all code lines from each chapter; see the notebooks folder.
  • The code to produce every Python/Matplotlib (2D) figure in the book; see the figures folder.
  • 3D point clouds from Chapter 14, and the code to create them; see the pointclouds folder.
  • 3D figures from Chapters 2-3 and 7-9, and the code to create them; see the 3dfigures folder.
  • All example scripts; see the examples folder.
  • To run the visual odometry example in Sect. 14.8.3 you need to download two image sequences, each over 100 MB; see the instructions here.

To get this material you must clone the repo:

git clone https://github.com/petercorke/RVC3-python.git

About

Code examples for Robotics, Vision & Control 3rd edition in Python

License: MIT

