SLAM

So, I’m finally getting into the field, and I’m pretty excited to launch my work and share my knowledge and project updates here. I’ll provide a brief description of the topic here later, but for now I’m going to share the highlights of what I’ve been through.

I’ve tried coding the Monte Carlo Localization algorithm using a simple LEGO robot (NXT) and MATLAB. (I have a dedicated page for this project here.)

Currently, I’m using the Intel RealSense R200 camera with ROS Indigo. To make it work, you need to do the following:

  1. Get Linux Ubuntu 14.04 installed on your machine
  2. Install ROS Indigo
  3. Get the needed libraries for the camera
  4. Install the needed packages for SLAM and for converting the 3D point cloud to a 2D laser scan
  5. Tweak some parameters and settings to make it work

So, for the sensor hardware, I’m using the Intel RealSense R200, a lightweight camera that captures RGB images and builds a 3D depth picture of the environment (more information here: https://software.intel.com/en-us/RealSense/R200Cameral). I’m using it to build a 3D map of the environment for the purpose of doing SLAM.
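Before any of the software setup, a quick sanity check I like to run (my addition, not part of the original steps): the R200 should show up as an Intel device (USB vendor ID 8086) when plugged into a USB 3.0 port:

lsusb | grep -i "8086"   # should list an Intel Corp. entry for the camera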

For the software part, I installed Ubuntu 14.04 (http://www.wikihow.com/Install-Ubuntu-Linux) on the machine I’ll be using, which is a pretty straightforward process (whether you already have a bootable CD/USB or download the image and make one yourself). Next, I installed ROS Indigo, which is a simple process too; the ROS wiki page explains it very well: http://wiki.ros.org/indigo/Installation/Ubuntu

# Add the ROS package repository and its signing key
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116
# Update the package index and install the full desktop version of ROS Indigo
sudo apt-get update
sudo apt-get install ros-indigo-desktop-full
# (Optional) list all available ROS Indigo packages
apt-cache search ros-indigo
# Initialize rosdep, which handles system dependencies for ROS packages
sudo rosdep init
rosdep update
# Source the ROS environment in every new shell
echo "source /opt/ros/indigo/setup.bash" >> ~/.bashrc
source ~/.bashrc
# Install rosinstall, a frequently used command-line tool
sudo apt-get install python-rosinstall
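To confirm the installation went through, a quick check (my addition; both commands are standard ROS tools):

rosversion -d   # should print "indigo"
roscore         # the ROS master should start without errors; Ctrl-C to stop it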

The next step is to follow the beginner-level ROS tutorials here: http://wiki.ros.org/ROS/Tutorials#Beginner_Level

Next, I need the package for the camera. Again, the ROS wiki has quite clear and simple instructions, explained here: http://wiki.ros.org/realsense_camera
However, I’m putting all the steps together here as a brief reference, so whenever I need to do this again I can get through it quickly without going through all the pages! If something goes wrong, check the realsense_camera page on the ROS wiki, because it lists some installation alternatives and has a dedicated troubleshooting section.

# Enable the kernel sources (needed so the camera's video driver patch can build)
wget -O enable_kernel_sources.sh http://bit.ly/en_krnl_src
bash ./enable_kernel_sources.sh
# Install the librealsense driver library and the ROS camera package
sudo apt-get install ros-indigo-librealsense
sudo apt-get install ros-indigo-realsense-camera
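As a quick check that the package is actually visible to ROS (my own habit, not from the wiki):

rospack find realsense_camera   # should print the package's install path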

So, here is the sequence to make it work (including steps that aren’t specific to the RealSense camera, but are still necessary).
First, we need to set up a new catkin workspace (more on catkin workspaces here: http://wiki.ros.org/catkin/workspaces):

# Create the workspace directory and initialize it
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
catkin_init_workspace
# Build the (still empty) workspace and source its environment
cd ~/catkin_ws/
catkin_make
source devel/setup.bash
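If everything went well, the new workspace should now be on the package path (a quick check I add for convenience):

echo $ROS_PACKAGE_PATH   # should include ~/catkin_ws/src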

Next, to try the camera, type in the terminal:

roslaunch realsense_camera r200_nodelet_default.launch

then open a new terminal and type in it:

rosrun rviz rviz -d $(rospack find realsense_camera)/rviz/realsenseRvizConfiguration1.rviz

This only gives RGB image streaming, and I was very frustrated at this stage because I couldn’t understand why I couldn’t get the point cloud I wanted! After digging into more reading material, I found out that running the camera nodelet by default won’t give the point cloud, and I need to type this instead:

roslaunch realsense_camera r200_nodelet_rgbd.launch
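Before opening rviz, it’s worth confirming the point cloud is actually being published this time (my own check; the exact topic name may differ, so verify it with the first command):

rostopic list | grep points                   # find the point cloud topic
rostopic hz /camera/depth_registered/points   # assumed topic name; should show a steady rate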

and again, in a new terminal, type:

rosrun rviz rviz -d $(rospack find realsense_camera)/rviz/realsenseRvizConfiguration1.rviz

I got something like this:

[Image: the point cloud displayed in rviz]

Note:
Sometimes it may crash or not even initialize. Try unplugging the camera and plugging it back in. I wish I had known about this earlier; it would have saved me many hours of frustration and confusion.

Now, let’s turn this into something interesting by doing the actual mapping! I’ve used the RTAB-Map library for the mapping, and it seems like a good one to me (more information here: http://wiki.ros.org/rtabmap_ros). To install it, simply type:

sudo apt-get install ros-indigo-rtabmap-ros

To get it running, type:

roslaunch realsense_camera r200_nodelet_rgbd.launch

and in another terminal, type:

roslaunch rtabmap_ros rgbd_mapping.launch rtabmap_args:="--delete_db_on_start" depth_registered_topic:=/camera/depth_registered/sw_registered/image_rect_raw

I got this:

[Image: the RTAB-Map mapping result]
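A side note (my addition, not part of the steps above): RTAB-Map saves its map database to ~/.ros/rtabmap.db by default, and it can be inspected offline with the viewer that ships with rtabmap:

rtabmap-databaseViewer ~/.ros/rtabmap.db   # browse the recorded map database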

Next up: converting the point cloud to a laser scan. To make it work, first launch the camera with the point cloud:

 roslaunch realsense_camera r200_nodelet_rgbd.launch

Next, we want to convert the point cloud to a laser scan. For that we use the depthimage_to_laserscan package (wiki page here: http://wiki.ros.org/depthimage_to_laserscan), which takes the depth image and produces a 2D scan. However, it expects the depth messages on the topic image, so the camera’s depth topic has to be remapped. I’ve created a package named pcloud_laser and created a launch file in it (fake_laser.launch) with the following lines:

<launch>
  <node name="depthimage_to_laserscan" pkg="depthimage_to_laserscan" type="depthimage_to_laserscan">
    <!-- The node expects depth images on "image"; remap it to the camera's depth topic -->
    <remap from="image" to="/camera/depth/image"/>
  </node>
</launch>
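For completeness, here is a sketch of how such a package can be created in the workspace from earlier (my notes above don’t show this part; the package and file names match the ones used below):

cd ~/catkin_ws/src
catkin_create_pkg pcloud_laser depthimage_to_laserscan   # create the package with its one dependency
mkdir -p pcloud_laser/launch
# save the XML above as pcloud_laser/launch/fake_laser.launch, then rebuild:
cd ~/catkin_ws && catkin_make
source devel/setup.bash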

So, the next step is:

roslaunch pcloud_laser fake_laser.launch
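To verify the conversion is working (my addition; depthimage_to_laserscan publishes on the scan topic by default):

rostopic hz /scan          # the fake laser should publish at a steady rate
rostopic echo -n 1 /scan   # print one LaserScan message to inspect the ranges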

Finally, to visualize it, run rviz:

rosrun rviz rviz

So, here is how it looks with only the point cloud displayed:

[Image: point cloud only]

and now with only the laser scan displayed:

[Image: laser scan only]

and here is a snapshot of the laser scan overlaid on the image:

[Image: laser scan overlaid on the camera image]

Okay, now we’re good to go. We’ll use the hector_slam package because it doesn’t need odometry. The first step is just trying it out, so I’ll ignore everything about the robot for now.
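(My notes don’t include the install step; assuming the standard Indigo Debian package, which is my assumption to verify with apt-cache, it would be:)

sudo apt-get install ros-indigo-hector-slam   # assumed package name for Indigo

After installing, go to the launch file and edit the frame names as follows: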

roscd hector_mapping
cd launch
gedit mapping_default.launch

and edit it to be something like:

<?xml version="1.0"?>

<launch>
  <arg name="tf_map_scanmatch_transform_frame_name" default="scanmatcher_frame"/>
  <arg name="base_frame" default="base_footprint"/>
  <arg name="odom_frame" default="nav"/>
  <arg name="pub_map_odom_transform" default="true"/>
  <arg name="scan_subscriber_queue_size" default="5"/>
  <arg name="scan_topic" default="scan"/>
  <arg name="map_size" default="2048"/>

  <node pkg="hector_mapping" type="hector_mapping" name="hector_mapping" output="screen">

    <!-- Frame names -->
    <param name="map_frame" value="/map" />
    <param name="base_frame" value="/camera_link" />
    <param name="odom_frame" value="/camera_link" />

    <!-- Tf use -->
    <param name="use_tf_scan_transformation" value="true"/>
    <param name="use_tf_pose_start_estimate" value="false"/>
    <param name="pub_map_odom_transform" value="true"/>

    <!-- Map size / start point -->
    <param name="map_resolution" value="0.050"/>
    <param name="map_size" value="$(arg map_size)"/>
    <param name="map_start_x" value="0.5"/>
    <param name="map_start_y" value="0.5" />
    <param name="map_multi_res_levels" value="2" />

    <!-- Map update parameters -->
    <param name="update_factor_free" value="0.4"/>
    <param name="update_factor_occupied" value="0.9" />
    <param name="map_update_distance_thresh" value="0.4"/>
    <param name="map_update_angle_thresh" value="0.01" />
    <param name="laser_z_min_value" value = "-1.0" />
    <param name="laser_z_max_value" value = "1.0" />

    <!-- Advertising config -->
    <param name="advertise_map_service" value="true"/>

    <param name="scan_subscriber_queue_size" value="$(arg scan_subscriber_queue_size)"/>
    <param name="scan_topic" value="$(arg scan_topic)"/>

    <!-- Debug parameters -->
    <!--
      <param name="output_timing" value="false"/>
      <param name="pub_drawings" value="true"/>
      <param name="pub_debug_output" value="true"/>
    -->
    <param name="tf_map_scanmatch_transform_frame_name" value="$(arg tf_map_scanmatch_transform_frame_name)" />
  </node>

 <!-- <node pkg="tf" type="static_transform_publisher" name="base_to_laser_broadcaster" args="0 0 0 0 0 0 base_link camera_link 100"/> -->

</launch>
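A tip for later, once everything below is running: if the map doesn’t build, the frame names above are the usual suspect. The transform tree can be dumped with the standard tf tooling (my addition):

rosrun tf view_frames   # listens for a few seconds, then writes frames.pdf in the current directory
evince frames.pdf       # or any PDF viewer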

Also, we need to edit the tutorial launch file as follows:

roscd hector_slam_launch
cd launch
gedit tutorial.launch

and then:

<?xml version="1.0"?>

<launch>

  <arg name="geotiff_map_file_path" default="$(find hector_geotiff)/maps"/>

  <param name="/use_sim_time" value="false"/>

  <node pkg="rviz" type="rviz" name="rviz"     args="-d $(find hector_slam_launch)/rviz_cfg/mapping_demo.rviz"/>

  <include file="$(find hector_mapping)/launch/mapping_default.launch"/>

  <include file="$(find hector_geotiff)/launch/geotiff_mapper.launch">
    <arg name="trajectory_source_frame_name" value="scanmatcher_frame"/>
    <arg name="map_file_path" value="$(arg geotiff_map_file_path)"/>
  </include>

</launch>

And then run:

 roslaunch hector_slam_launch tutorial.launch 

I’d get something like this:

[Screenshot: the hector_slam map being built in rviz]

Hint: always try rqt_graph; it’s a great asset for debugging how things work (or don’t!).
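For reference, it runs with (standard ROS command):

rosrun rqt_graph rqt_graph   # draws the graph of running nodes and the topics connecting them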

So far so good. We can do mapping and localization as shown; to get a “moving” robot, we only need mobile robot hardware and the navigation stack running, so that a controller sends cmd_vel commands to drive the robot.
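For example, once a base is hooked up, it can be driven by hand by publishing velocity commands (a sketch; the topic name /cmd_vel is the common convention, but a real robot may remap it):

rostopic pub -r 10 /cmd_vel geometry_msgs/Twist '{linear: {x: 0.1, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.0}}'   # creep forward at 0.1 m/s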
