robotsorcerer / soft-neuro-adapt

Source code for reproducing the results in the paper: https://arxiv.org/abs/1703.03821


Intro

Source code for my IROS paper, A 3-DoF Neuro-Adaptive Pose Correcting System For Frameless and Maskless Cancer Radiotherapy (arXiv:1703.03821).

Dependencies

The neural network estimator runs in PyTorch. If you have a ROS distro installation that uses Python 2.7 and you want to keep your native PYTHONPATH, the cleanest way to switch Python versions without breaking anything is to install, for example, Python 3.5+ with conda and activate the py35 environment whenever you use this code.

To create, for example, a Python 3.5+ environment alongside your base Python installation, do

	conda create -n py35 python=3.5 anaconda

To activate the conda 3.5 environment, do:

	source activate py35

To deactivate this environment, use:

	source deactivate

Core dependencies

  • python 2.7+

Install with

	sudo apt-get install python-dev
  • pytorch

The PyTorch version of this code only works on a GPU. It was tested with CUDA 8.0 and driver version 367.49. Install with

	conda install pytorch torchvision cuda80 -c soumith
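
Since this port runs only on a GPU, it is worth verifying that torch can see your CUDA device after installing; a minimal sanity check (not part of the repository):

	import torch

	# should print True on a working CUDA 8.0 installation
	print(torch.cuda.is_available())
	# name of the detected device
	print(torch.cuda.get_device_name(0))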

  • ROS

Instructions for installing ROS can be found [on the OSRF ROS website](http://wiki.ros.org/indigo/Installation/Ubuntu).

  • PyPI dependencies

Install these with:

	pip install -r requirements.txt

Vision processing

You can use either the Vicon system or the Ensenso package. Follow these steps to get up and running.

If you did not clone the repository recursively, you can initialize the ensenso and vicon submodules as follows (a full sequence is sketched after this list):

  • Initialize submodules: git submodule init
  • Update submodules: git submodule update

Then compile the codebase with either catkin_make or catkin build.
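
For reference, a typical end-to-end sequence might look like the following (a sketch, assuming the repository lives at github.com/robotsorcerer/soft-neuro-adapt and sits in a standard catkin workspace):

	# clone recursively so the ensenso and vicon submodules come along
	git clone --recursive https://github.com/robotsorcerer/soft-neuro-adapt.git
	# or, for an existing non-recursive clone:
	git submodule update --init --recursive
	# then build the workspace
	catkin build    # or catkin_make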

The Vicon System Option

cd into the vicon directory and follow the README instructions there. With the Vicon system, you get a more accurate representation of the face. You would want four markers on the face in a rhombic pattern, preferably named fore, left, right, and chin to conform with the direction cosines code. With the Vicon ICP code, which is what we run by default, the named markers are not needed. Both codes extract the facial pose with respect to the scene. Make sure the subject and segment are appropriately named Superdude/head in Nexus.

	roslaunch vicon_bridge vicon.launch

This launches the adaptive model-following control algorithm, the ICP computation of head rotation about the table frame, and the Vicon ROS subscriber node.

The Ensenso Option

cd into the ensenso package and follow the README instructions therein. When done, run the controller and sensor face pose collector nodes as shown below.

Running the code

	 roslaunch nn_controller controller.launch

The pose tuple of the face, {z, pitch, roll}, is broadcast on the topic /mannequine_head/pose.
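
To verify that poses are coming through, you can echo the topic from another terminal:

	rostopic echo /mannequine_head/pose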

The reference pose that the head should track is located in the traj.yaml file. Amend this as you wish.
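
For illustration only, a reference-pose entry might look like the snippet below; the key names here are hypothetical, so consult the shipped traj.yaml for the actual schema:

	# hypothetical keys -- match them to the real traj.yaml
	z: 0.05      # desired head height (m)
	pitch: 0.0   # desired pitch (rad)
	roll: 0.0    # desired roll (rad)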

Neural Network Function Approximator

Previously written in Torch7 as the farnn package, this portion of the codebase has been migrated to pyrnn, built on the recently released PyTorch framework, to take advantage of Python libraries such as cvx for constraint-based adaptive quadratic programming.
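
For a flavor of what such a constraint-based step looks like, here is a minimal quadratic program in cvxpy (an illustrative sketch with made-up weights and limits, not the repository's formulation):

	# minimal constrained QP sketch in cvxpy -- illustrative values only
	import numpy as np
	import cvxpy as cp

	n = 3                        # one decision variable per DoF (z, pitch, roll)
	P = np.eye(n)                # quadratic cost weights (made up)
	q = np.zeros(n)              # linear cost term (made up)
	u_max = np.ones(n)           # hypothetical actuation limits

	u = cp.Variable(n)
	objective = cp.Minimize(0.5 * cp.quad_form(u, P) + q @ u)
	constraints = [cp.abs(u) <= u_max]
	problem = cp.Problem(objective, constraints)
	problem.solve()
	print(u.value)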

farnn

Running `farnn` consists of `roscd`ing into the `farnn` src folder and running `th real_time_predictor.lua` while the [nn_controller](/nn_controller) node is running.

pyrnn

`roscd` into the `pyrnn` src folder and run `./main.py`.


License

MIT License

