Sensor calibration instructions

Camera calibration

The Kinect camera can be calibrated using the kinect2_calibration module of the IAI_kinect2 package. The instructions for performing this calibration can be found here.
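
As a minimal sketch, recording calibration images with this module looks roughly like the commands below; the exact pattern argument and options depend on your calibration board, so follow the linked instructions for the authoritative invocation:

# Start the bridge at a reduced frame rate and record color images of a
# 5x7 chessboard with 0.03 m squares (adjust the pattern to your board).
rosrun kinect2_bridge kinect2_bridge _fps_limit:=2
rosrun kinect2_calibration kinect2_calibration chess5x7x0.03 record color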

Warning

Be sure to have the camera turned on for at least 15 minutes before performing the calibration so that the camera parameters can fully stabilize.

Robot eye-hand calibration

To be able to control the real robot, we also need to know the location of the robot relative to the camera. A robot eye-hand calibration is therefore performed at the start of the panda-autograsp solution. During this eye-hand calibration, you are asked to place a calibration pattern on the upper left corner of the table. The positions of this pattern and the robot have to be measured and specified in the ./cfg/calib_frames_poses.yaml file. You can also adjust the position and orientation of both the sensor and calibration frames by using the dynamic reconfigure window that is opened when the panda-autograsp solution is started. The changes made in the dynamic reconfigure window are saved to the ./cfg/calib_frames_poses.yaml file when you close the panda-autograsp solution.
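
As an illustration, a pose file such as ./cfg/calib_frames_poses.yaml typically stores a translation and an orientation per frame. The key names below are hypothetical; check the file shipped with the package for the actual format:

# Hypothetical layout of ./cfg/calib_frames_poses.yaml; the real key names
# may differ. Positions are in meters, orientations in radians.
calib_frame:    # pose of the calibration pattern on the table
  x: 0.6
  y: 0.3
  z: 0.0
  yaw: 0.0
sensor_frame:   # pose of the Kinect sensor
  x: 0.5
  y: -0.4
  z: 0.9
  yaw: 1.57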

Note

The panda-autograsp algorithm supports two types of calibration patterns: an Aruco board and a chessboard. Because the Aruco board gives a higher calibration accuracy, the algorithm assumes you are using one by default. If you want to use the chessboard instead, you have to change the pose_estimation_calib_board parameter in the ./cfg/main_config.cfg file.
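
Switching to the chessboard could then look like the snippet below; the value strings chessboard and arucoboard are taken from the calib_type launch file argument described later, but the exact file syntax is an assumption:

# In ./cfg/main_config.cfg (hypothetical syntax):
pose_estimation_calib_board: chessboard  # default: arucoboard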

Dynamic reconfigure window

As mentioned above, you can use the dynamic reconfigure GUI (shown below) to adjust the position and orientation of both the calibration and sensor frames. This is done by going to the tf_broadcaster tab and adjusting the sliders.

Figure: Dynamic reconfigure window that can be used to change the sensor and calibration frames.

Generate chessboard pattern

Different chessboard patterns can be found in the IAI_kinect2/kinect2_calibration repository. If you specified that you are using a chessboard, the algorithm currently assumes a 5x7 chessboard with 0.03 m squares. You can change this by adjusting the chessboard_settings in the ./cfg/main_config.cfg file.
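
A sketch of what such settings could look like; the key names under chessboard_settings are assumptions, so check ./cfg/main_config.cfg for the real ones:

# Hypothetical chessboard_settings block in ./cfg/main_config.cfg:
chessboard_settings:
  rows: 5            # inner corners per column
  columns: 7         # inner corners per row
  square_size: 0.03  # square edge length in meters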

Generate Aruco Board

To generate the Aruco board, please run the generate_arucoboard.py python script, which can be found in the ./scripts folder. After you run this script, an Aruco board can be found in the ./data/calib folder.
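
For example, from the package root:

# Generate the Aruco board; the output is written to ./data/calib.
python ./scripts/generate_arucoboard.py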

Package use instructions

Configuration instructions

All of the panda-autograsp settings can be found in the ./cfg/main_config.cfg file. As described below, you can temporarily override some of these settings by supplying launch arguments.

Add planning constraints

To add extra planning constraints to the scene, you can add additional JSON files to the /home/<USER>/.panda_simulation folder. This folder is created the first time the panda-autograsp solution is launched. A guide on how this can be done can be found here.
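
As a purely illustrative sketch, such a file could be dropped in place as shown below; the JSON field names are hypothetical, so follow the linked guide for the actual schema:

# Add a hypothetical box-shaped collision object; the JSON schema shown
# here is an assumption -- see the linked guide for the real format.
cat > $HOME/.panda_simulation/side_wall.json << 'EOF'
{
  "name": "side_wall",
  "dimensions": [0.1, 1.0, 0.5],
  "position": [0.0, 0.6, 0.25]
}
EOF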

Run instructions

Figure: Overview of the panda-autograsp algorithm (moveit_perception module disabled).

To start the panda-autograsp solution, please open a terminal and run the following command:

roslaunch panda_autograsp panda_autograsp.launch

This launch file can also be supplied with a number of launch file arguments, which can be used to temporarily override the settings you configured in the ./cfg/main_config.cfg file. The available launch file arguments, together with an example invocation, can be found below. After the panda-autograsp solution has launched, you can start the command-line interface (CLI) by running the following command in a second terminal window:

rosrun panda_autograsp panda_autograsp_cli.py

Note

Before executing the commands mentioned above, make sure the panda-autograsp package is built and sourced. Further, make sure the pretrained GQCNN models are present in the ./panda_autograsp/models folder. If they are not found, the panda_autograsp.launch file will ask whether you want to install them. You can also install them by running the following command from within the catkin workspace src folder:

./gqcnn/scripts/downloads/models/download_models.sh

Launch file arguments

The panda_autograsp launch file accepts the following launch file arguments:

  • real: Specifies whether you want to use the panda-autograsp solution on the real robot, by default false.
  • gazebo: Specifies whether the gazebo GUI should be loaded.
  • external_franka_control: Set this to true if you want to load the franka_control node on another PC on the same network.
  • robot_ip: Set this to the robot's IP address if you are working with the real robot.
  • rviz_gui: Specifies whether the RViz GUI should be loaded.
  • calib_type: The robot hand-eye calibration board type (chessboard vs arucoboard); overrides the default that is set in the ./cfg/main_config.cfg file.
  • debug: If true, the verbosity of the ROS log messages is increased and the process name is displayed with each log message, by default set to false.
  • moveit_perception: Enables the MoveIt perception module, which integrates sensor data into the path planning, by default set to false.
  • octomap_type: The data type used by the MoveIt perception module (pointcloud or depthmap), defaults to depthmap.
  • octomap_resolution: The resolution of the octomap.
  • moveit_add_scene_collision_objects: Specifies whether the robot table should be added as a collision object to the MoveIt planning space, by default set to true.
  • grasp_solution: Specifies which grasping solution you want to use; overrides the solution that is set in the ./cfg/main_config.cfg file.
  • use_bounding_box: If set to true, the bounding box that is specified in the ./cfg/main_config.cfg file will be used when planning the grasp.
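
For example, to run the solution on the real robot with one of the grasping solutions listed in the next section, the arguments above can be combined as follows (replace <ROBOT_IP> with the IP address of your robot):

roslaunch panda_autograsp panda_autograsp.launch real:=true robot_ip:=<ROBOT_IP> grasp_solution:=GQCNN-4.0-PJ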

Grasping solutions

Currently, the following grasping solutions are supported:

  • GQCNN-2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics. Trained on Dex-Net 2.0.
  • GQCNN-2.1: Extension of GQCNN-2.0. The network that was trained on Dex-Net 2.0 is improved using reinforcement learning in simulation.
  • GQCNN-4.0-PJ: Improvement of GQCNN-2.0 that computes more accurate grasps. This network additionally states whether it is better to grasp an object with a suction or a parallel-jaw gripper.
  • FC-GQCNN-4.0-PJ: Modification of GQCNN-4.0-PJ in which a fully convolutional grasp quality CNN (FC-GQ-CNN) is used. This model has a faster grasp computation time and computes more accurate grasps.

MoveIt perception

Change settings

Apart from the launch file settings, additional MoveIt perception settings can be found in the ./panda_moveit_config/config/sensor_kinect_deptmap.yaml and sensor_kinect_pointcloud.yaml files.
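
As a rough sketch, such a MoveIt sensor configuration file contains an occupancy map updater plugin definition along the lines of the snippet below; the plugin and parameter names are standard MoveIt ones, but the topic name is an assumption that has to match the kinect2_bridge topics:

# Sketch of a point cloud sensor configuration (sensor_kinect_pointcloud.yaml);
# the topic name is an assumption -- adapt it to your camera topics.
sensors:
  - sensor_plugin: occupancy_map_monitor/PointCloudOctomapUpdater
    point_cloud_topic: /kinect2/sd/points
    max_range: 5.0
    point_subsample: 1
    padding_offset: 0.1
    padding_scale: 1.0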

Clear the octomap

The octomap can be reset during execution by calling the reset_octomap service:

rosservice call /reset_octomap

Panda simulations

A gazebo simulation of the panda robot can be used by setting the real launch file argument to false and the gazebo argument to true.
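
For example:

roslaunch panda_autograsp panda_autograsp.launch real:=false gazebo:=true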

Figure: Overview of the gazebo simulation environment.

Warning

The simulated Kinect publishes its data in the format of the openni_launch package, while the real setup uses the iai_kinect2 package; as a result, the simulation is not fully ready. To get it to work, one has to either replace the iai_kinect2 package in the real setup with openni_launch, or write a modified version of the iai_kinect2/kinect2_bridge that works with the simulated camera instead of looking for a real one.

Singularity container instructions

Container aliases

For convenience, a number of bash aliases were added to the containers:

  • agrasp: activates the autograsp conda environment
  • dgrasp: deactivates the autograsp conda environment
  • dconda: deactivates the conda environment
  • rsource: sources the ROS setup.bash file
  • rossu: sources the catkin workspace setup.bash file at ./devel/setup.bash
  • pbuild: panda-autograsp package build command. Make sure you are inside the panda-autograsp catkin workspace.

You can unset these aliases or add additional aliases by creating a ~/.singularity_bash_aliases file in your user home directory. If such a file is present in your home folder, it will be sourced after the container's own ~/.singularity_bash_aliases file.
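
For instance, a minimal ~/.singularity_bash_aliases file that removes one container alias and adds a custom one of your own (the rosnodes alias is just an example name):

# ~/.singularity_bash_aliases -- sourced after the container's alias file.
unalias pbuild 2> /dev/null    # drop a container alias you do not want
alias rosnodes='rosnode list'  # add an extra alias of your own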

Container .bashrc

A .singularity_bashrc file is present inside the containers. This file is sourced when you run the container. You can overload all of the commands, aliases and settings contained in this .singularity_bashrc file by creating your own ~/.singularity_bashrc file in your home folder. If such a file is present in your home folder, it will be sourced after the container's own .singularity_bashrc file.

Container run instructions

You can use the singularity shell, start and run commands to interact with the container. You are advised to use the run command since this also sources a .singularity_bashrc file that is present in each of the containers. This file can be used as a .bashrc file. You can run the singularity container using one of the following run commands:

  • With Nvidia GPU: singularity run --nv <YOUR_CONTAINER_NAME>
  • Without Nvidia GPU: singularity run <YOUR_CONTAINER_NAME>

Note

Additionally, you can add the --writable parameter to the run command to receive write permissions.

Warning

Please note that singularity currently only links to the NVIDIA drivers when you are using the container in read-only mode. As a result, when you start the container using the --writable flag, you cannot leverage the GPU computation capabilities of your NVIDIA graphics card.

Container permissions

By default, your user cannot write to the container folder from outside of the container. If you built the singularity container as a writable folder, you can give your user read and write access from outside the container by:

  1. Changing the group owner to your user group:
sudo chgrp -R <YOUR_USER_NAME> ./<YOUR_CONTAINER_NAME>
  2. Giving your user group read and write access to the <YOUR_CONTAINER_NAME> folder:
sudo chmod -R g+rwx ./<YOUR_CONTAINER_NAME>

Add a Visual Studio Code IDE to the singularity container

Visual Studio Code can be added to the singularity container in order to enable easy code debugging. This is done as follows:

  1. Run your container in writable mode using sudo singularity run --writable <YOUR_CONTAINER_NAME>
  2. Install Visual Studio Code or code-insiders using the following bash commands:
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
sudo install -o root -g root -m 644 microsoft.gpg /etc/apt/trusted.gpg.d/
sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/vscode stable main" > /etc/apt/sources.list.d/vscode.list'
sudo apt-get install apt-transport-https
sudo apt-get update
sudo apt-get install code # or code-insiders

Note

Since Visual Studio Code requires access to the /run folder, you need to add the -B /run argument when running the singularity container. For more information, see this issue.
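
Combined with the GPU option from the run instructions above, the full command then becomes:

singularity run --nv -B /run <YOUR_CONTAINER_NAME>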

Troubleshooting information

When you run into problems, please check the issues tab for a solution. Some commonly encountered issues are listed below.

Known issues

  • I get a [Err] [REST.cc:205] Error in REST request error when the gazebo simulation is opened -> #139
  • The planner does not find a path when moveit_perception is enabled -> #141
  • I get a return matrix @ rmat error -> #142
  • I get a cannot import name <MODULE> error -> #144
  • I get a ValueError while trying to use catkin build inside the container -> #149
  • I get a CUDA driver version is insufficient error when inside the container -> #150
  • I get a module cv2.aruco not found error -> #164