pirobot / rbx2

ROS By Example Volume 2

Does it work with ROS Kinetic?

Abduoit opened this issue

I cloned the kinetic branch and ran catkin_make successfully, but I get the following error when I run roslaunch rbx2_vision usb_cam.launch:

Any help, please?

abdulrahman@abdulrahman-ThinkPad-X230-Tablet:~$ roslaunch rbx2_vision usb_cam.launch 
... logging to /home/abdulrahman/.ros/log/ec95c2a4-b01d-11e7-b674-e09d31097f54/roslaunch-abdulrahman-ThinkPad-X230-Tablet-4871.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://abdulrahman-ThinkPad-X230-Tablet:33187/

SUMMARY
========

CLEAR PARAMETERS
 * /usb_cam/

PARAMETERS
 * /rosdistro: kinetic
 * /rosversion: 1.12.7
 * /usb_cam/autofocus: True
 * /usb_cam/brightness: 32
 * /usb_cam/camera_frame_id: camera_link
 * /usb_cam/contrast: 32
 * /usb_cam/framerate: 30
 * /usb_cam/image_height: 480
 * /usb_cam/image_width: 640
 * /usb_cam/pixel_format: mjpeg
 * /usb_cam/saturation: 32
 * /usb_cam/video_device: /dev/video0

NODES
  /
    usb_cam (usb_cam/usb_cam_node)

auto-starting new master
process[master]: started with pid [4882]
ROS_MASTER_URI=http://localhost:11311

setting /run_id to ec95c2a4-b01d-11e7-b674-e09d31097f54
process[rosout-1]: started with pid [4895]
started core service [/rosout]
ERROR: cannot launch node of type [usb_cam/usb_cam_node]: usb_cam
ROS path [0]=/opt/ros/kinetic/share/ros
ROS path [1]=/home/abdulrahman/catkin_ws/src
ROS path [2]=/opt/ros/kinetic/share
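
(The error above usually just means the usb_cam package isn't anywhere on the ROS package path. On Kinetic it should be installable from apt; the package name below assumes the standard released binary:)

sudo apt-get install ros-kinetic-usb-cam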

After installing usb_cam, I get the following output:

abdulrahman@abdulrahman-ThinkPad-X230-Tablet:~$ roslaunch rbx2_vision usb_cam.launch 
... logging to /home/abdulrahman/.ros/log/0c3ed652-b025-11e7-b674-e09d31097f54/roslaunch-abdulrahman-ThinkPad-X230-Tablet-21137.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://abdulrahman-ThinkPad-X230-Tablet:42769/

SUMMARY
========

CLEAR PARAMETERS
 * /usb_cam/

PARAMETERS
 * /rosdistro: kinetic
 * /rosversion: 1.12.7
 * /usb_cam/autofocus: True
 * /usb_cam/brightness: 32
 * /usb_cam/camera_frame_id: camera_link
 * /usb_cam/contrast: 32
 * /usb_cam/framerate: 30
 * /usb_cam/image_height: 480
 * /usb_cam/image_width: 640
 * /usb_cam/pixel_format: mjpeg
 * /usb_cam/saturation: 32
 * /usb_cam/video_device: /dev/video0

NODES
  /
    usb_cam (usb_cam/usb_cam_node)

auto-starting new master
process[master]: started with pid [21148]
ROS_MASTER_URI=http://localhost:11311

setting /run_id to 0c3ed652-b025-11e7-b674-e09d31097f54
process[rosout-1]: started with pid [21163]
started core service [/rosout]
process[usb_cam-2]: started with pid [21177]
[ INFO] [1507905883.228899801]: using default calibration URL
[ INFO] [1507905883.229043347]: camera calibration URL: file:///home/abdulrahman/.ros/camera_info/head_camera.yaml
[ INFO] [1507905883.229173285]: Unable to open camera calibration file [/home/abdulrahman/.ros/camera_info/head_camera.yaml]
[ WARN] [1507905883.229240846]: Camera calibration file /home/abdulrahman/.ros/camera_info/head_camera.yaml not found.
[ INFO] [1507905883.229314152]: Starting 'head_camera' (/dev/video0) at 640x480 via mmap (mjpeg) at 30 FPS
[ WARN] [1507905883.307729529]: sh: 1: v4l2-ctl: not found

[ WARN] [1507905883.310575079]: sh: 1: v4l2-ctl: not found

[ WARN] [1507905883.314483441]: sh: 1: v4l2-ctl: not found

[ WARN] [1507905883.316570883]: sh: 1: v4l2-ctl: not found

[ INFO] [1507905883.316693122]: V4L2_CID_FOCUS_AUTO is not supported
[ WARN] [1507905883.319229177]: sh: 1: v4l2-ctl: not found

[mjpeg @ 0x259eb20] Changeing bps to 8
[swscaler @ 0x25a8a60] deprecated pixel format used, make sure you did set range correctly
[swscaler @ 0x25a9a40] deprecated pixel format used, make sure you did set range correctly
[swscaler @ 0x25aabc0] deprecated pixel format used, make sure you did set range correctly
[swscaler @ 0x25a9460] deprecated pixel format used, make sure you did set range correctly
[swscaler @ 0x25a8c60] deprecated pixel format used, make sure you did set range correctly
[swscaler @ 0x25a8c60] deprecated pixel format used, make sure you did set range correctly
[swscaler @ 0x25a8c20] deprecated pixel format used, make sure you did set range correctly
[swscaler @ 0x25a9c00] deprecated pixel format used, make sure you did set range correctly
[swscaler @ 0x25a9460] deprecated pixel format used, make sure you did set range correctly
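
Side note for anyone following along: the repeated "sh: 1: v4l2-ctl: not found" warnings just mean the V4L2 command-line tools aren't installed; on Ubuntu they come from the v4l-utils package:

sudo apt-get install v4l-utils

The missing head_camera.yaml calibration warning is harmless for basic use. If you want to get rid of it, you can generate a calibration file with the standard camera_calibration tool; the checkerboard size and square dimension below are only placeholders for whatever target you actually print:

rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.025 image:=/usb_cam/image_raw camera:=/usb_cam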

The ROS By Example code has not been updated for ROS Kinetic. However, you can find some hints on the changes you have to make in this thread on the ROS By Example forum.

To help anyone else who might stumble on this looking for a verbose answer, here are the tweaks that worked for me using a Trossen turret with an ArbotiX-M board and an Xbox 360 Kinect.

I got an Xbox 360 Kinect that I bought from Goodwill for $8 (and a Trossen Robotics PhantomX turret with AX-18A Dynamixels) working with this project on Ubuntu 14.04 using ROS Indigo (yes, 14.04, downloaded and installed in July 2019), per the article below. The nearest_cloud tracking using the turret hardware is working well after getting freenect installed:

https://answers.ros.org/question/196455/kinect-installation-and-setup-on-ros-updated/

The pertinent bit of info from a post in that thread, which fixed the issue with the 360 Kinect:

openni_launch does not work anymore under Indigo for the Kinect, but freenect_launch does. So install freenect_launch and libfreenect:

sudo apt-get install libfreenect-dev

sudo apt-get install ros-indigo-freenect-launch

Then call:

roslaunch freenect_launch freenect.launch

If it still does not work, disconnect the Kinect from USB and plug it in again.

One more thing: it didn't work for me when the Kinect was connected through a USB 2.0 hub; it had to be plugged directly into a USB 3.0 port on the PC.

After installing freenect, the commands I ran, from top to bottom, to get head tracking working with nearest_cloud were:

roslaunch rbx2_bringup pi_robot_head_only.launch sim:=false
roslaunch rbx2_diagnostics monitor_dynamixels.launch
roslaunch rbx2_dynamixels head_tracker.launch sim:=false
roslaunch freenect_launch freenect.launch
roslaunch rbx2_vision nearest_cloud.launch
rosrun rviz rviz -d `rospack find rbx2_vision`/config/track_pointcloud.rviz
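
As a quick sanity check that the Kinect data is actually flowing before suspecting the turret side, you can list the camera topics and check the publish rate on the point cloud; the exact topic name here is an assumption and depends on how freenect.launch is configured on your machine:

rostopic list | grep camera
rostopic hz /camera/depth/points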

That should get a cheapo 360 Kinect reading in the goods and using it to accurately guide a Dynamixel/ArbotiX turret to track the nearest cloud of depth data returned by nearby objects (I'm probably butchering terminology here, but hey, it works on my rig). The code from Trossen's getting-started guide for the Dynamixel 12s works fine for the 18s too (no need to modify). Also, I think you can probably run either of the last two commands instead of both; the last one just gives you an RViz representation of what it's doing with the data and the turret.

Another note for ArbotiX-M users (if you bought a nice Trossen PhantomX turret, which comes with the ArbotiX-M board and not a Dynamixel-to-USB adapter or whatever the source I stumbled on was configured for):

The baud rate in most of the YAML and other config files is set to 1000000, which will not work with the ArbotiX-M. You have to set this to 115200, which is what the ArbotiX boards are configured for by default. Another notable quirk with the ArbotiX boards is that you have to load the ros.ino firmware onto the board using the older Arduino 1.0.6 release. Newer Arduino releases don't appear to support this board, and there's a good chance the newest Arduino IDE won't recognize it, even if it shows up in /dev as ttyUSB0 or whatever. Follow the tutorials in the ArbotiX getting-started guide by downloading and moving the hardware and sketch files where it says they should be. The guide is here:

https://learn.trossenrobotics.com/arbotix/arbotix-quick-start.html
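
For reference, a quick way to find every place the baud rate is set, so you can change 1000000 to 115200; this assumes the rbx2 source is cloned under ~/catkin_ws/src/rbx2, so adjust the path to match your workspace:

grep -rn "baud" ~/catkin_ws/src/rbx2/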

Sorry for the tangent, I just wanted to share what I had to do to get all of this functional. Also, thank you so much to Patrick Goebel for this brilliantly useful documentation on applied robotics. This is the single most useful document I've found related to depth tracking and robotics; nothing else I've followed has been nearly as useful at providing such a thorough introduction to ROS, even though I ran into issues because the repos for much of the source have changed dramatically since Trusty Tahr (they still actually work fine). Genuinely valuable documentation, and I encourage anyone reading this to purchase a copy of this person's work!

Thanks for the post @WyattAutomation! If anyone would like to issue some pull requests to get other parts of RBX working on ROS Kinetic, I would be happy to merge them. The biggest differences from Indigo are changes to the OpenCV API.
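
For anyone starting on that port: Indigo builds against OpenCV 2.4 while Kinetic ships OpenCV 3, which is where most of the API breakage comes from. A quick way to confirm which version your Python nodes are actually importing:

python -c "import cv2; print(cv2.__version__)"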

I'm an instant fan, and would certainly love to see a ROS Melodic update, and/or to contribute in any way possible (though I would likely need to study your documentation for some time just to get up to speed enough to make any meaningful contribution).

I stumbled upon your documentation not too long ago and was blown away by how useful it is. I've had that Trossen turret sitting on my desk, acting as an expensive cat-laser toy, because it felt like it was going to be a bit too much for me to learn how to get other code, or my own, running on it.

Because of your work, I now have a fully functional vision tracking system (and much more) with access to the underlying code, and, to top it all off, your documentation that explains the whole thing in its entirety: the underlying theory and the application. It's every single thing I need to learn what I need to learn and get my projects off the ground.

Some time ago, I trained PJreddie's YOLOv3 model (https://pjreddie.com/darknet/yolo/) using the Google Open Images V4 dataset to detect custom objects in live video. There weren't any decent guides that I could find on how to do that, which seemed odd, as both (at the time) appeared to be the highest-quality model and dataset publicly available for object recognition. I figured it out using fragments of other people's tutorials on training the model with other datasets, and produced documentation on the specific process of training it with OIV4, along with a couple of scripts for data prep:
https://github.com/WyattAutomation/Train-YOLOv3-with-OpenImagesV4

I very much need to return to that repo and fix the script for multiple classes (although I have a more manual way of training it to detect multiple classes), but the reason I haven't is that I've been spending nearly all of my time outside of the office searching for a way to demonstrate its application in robotics.

My goal is to implement something like the "YOLO" model in ROS and start working towards having an adaptable system capable of full task automation (whether I am able to find a way of using most of the original darknet source in C, or whether I have to rebuild it from the ground up for use in ROS in another language). I want to find out what the constraints are for someone like myself, with a pretty basic high-level understanding of the systems involved, to put together a robot that can do something like cook a grilled cheese sandwich, or do some other task as well as or better than a human.

Anyway, working towards conditional programming in the presence of a detected object is what I really would like to do with this, though getting to the newest version of ROS would also be a very useful step.

Again, thanks for the docs and code; I've made more progress in a week than I did in the 4 months leading up to it, and it's opened up an entire Narnia of possibilities for my own projects. Let me know if you have any interest in working on something like what I mentioned; you're light-years ahead of where I will likely ever be, but I can certainly contribute support at an end-user level and tackle issues as they come in!

Thanks again!
-Gene Wyatt