panda_autograsp.panda_autograsp_server_ros

Module that sets up a number of services that enable the panda_autograsp_cli command line interface (CLI) to communicate with the components of the panda_autograsp solution.

Classes

PandaAutograspServer([pose_calib_method, …]) Class used to create and call all the panda_autograsp components.
class panda_autograsp.panda_autograsp_server_ros.PandaAutograspServer(pose_calib_method='aruco_board', gazebo=False, bounding_box_enabled=False)[source]

Class used to create and call all the panda_autograsp components.

rvec

The rotation vector of the camera/world calibration.

Type:list
tvec

The translation vector of the camera/world calibration.

Type:list
__init__(pose_calib_method='aruco_board', gazebo=False, bounding_box_enabled=False)[source]
Parameters:
  • pose_calib_method (str, optional) – Calibration pattern type, by default read from main_config.yaml.
  • gazebo (bool, optional) – Specifies whether you want to use a Gazebo simulation, by default False.
  • bounding_box_enabled (bool, optional) – Specifies whether you want to use a bounding box for the grasp detection, by default False.
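A minimal usage sketch is given below, assuming the server is started inside its own ROS node; the node name and the constructor arguments shown here are illustrative, not taken from the package launch files.

    # Hypothetical usage sketch; node name and argument values are assumptions.
    import rospy
    from panda_autograsp.panda_autograsp_server_ros import PandaAutograspServer

    if __name__ == "__main__":
        # Initialise the ROS node before constructing the server, since the
        # constructor registers the subscribers and services documented below.
        rospy.init_node("panda_autograsp_server")

        # Create the server with the (assumed) default calibration pattern.
        server = PandaAutograspServer(
            pose_calib_method="aruco_board",
            gazebo=False,
            bounding_box_enabled=False,
        )

        # Keep the node alive so the advertised services stay available.
        rospy.spin()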
get_pose_callback(pose_msg)[source]

Callback function of the ‘gqcnn_graps/pose’ subscriber. This function updates the self.pose_msg member variable.

Parameters:pose_msg (geometry_msgs.msg.PoseStamped) – Grasp pose message.
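For reference, a subscriber that feeds this kind of callback is typically registered as sketched below; the topic name follows the docstring above and may differ from the one used in the actual implementation.

    # Sketch of a grasp pose subscriber; the topic name is taken from the docstring above.
    import rospy
    from geometry_msgs.msg import PoseStamped

    def get_pose_callback(pose_msg):
        # Store or log the latest grasp pose for later use by the planning services.
        rospy.loginfo("Received grasp pose: %s", pose_msg.pose)

    rospy.init_node("grasp_pose_listener")
    rospy.Subscriber("gqcnn_graps/pose", PoseStamped, get_pose_callback)
    rospy.spin()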
msg_filter_callback(color_image, color_image_rect, depth_image_rect, camera_info_hd, camera_info_qhd, camera_info_sd)[source]

Callback function of the message filter. This message filter subscribes to a number of camera topics that are required by the panda_autograsp solution.

Parameters:
  • color_image (sensor_msgs.msg.Image) – The color image.
  • color_image_rect (sensor_msgs.msg.Image) – The rectified color image.
  • depth_image_rect (sensor_msgs.msg.Image) – The rectified depth image.
  • camera_info_hd (sensor_msgs.msg.CameraInfo) – The HD camera info topic.
  • camera_info_qhd (sensor_msgs.msg.CameraInfo) – The QHD camera info topic.
  • camera_info_sd (sensor_msgs.msg.CameraInfo) – The SD camera info topic.
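A sketch of how such a synchronised callback is commonly wired up with the message_filters package is shown below; the topic names are placeholders and the queue size and slop values are assumptions.

    # Sketch of a synchronised camera callback using message_filters.
    import rospy
    import message_filters
    from sensor_msgs.msg import Image, CameraInfo

    def msg_filter_callback(color_image, color_image_rect, depth_image_rect,
                            camera_info_hd, camera_info_qhd, camera_info_sd):
        rospy.loginfo_throttle(5, "Received a synchronised set of camera messages")

    rospy.init_node("camera_msg_filter_example")

    subscribers = [
        message_filters.Subscriber("/camera/hd/image_color", Image),
        message_filters.Subscriber("/camera/hd/image_color_rect", Image),
        message_filters.Subscriber("/camera/sd/image_depth_rect", Image),
        message_filters.Subscriber("/camera/hd/camera_info", CameraInfo),
        message_filters.Subscriber("/camera/qhd/camera_info", CameraInfo),
        message_filters.Subscriber("/camera/sd/camera_info", CameraInfo),
    ]

    # Approximate time synchronisation, since the topics are not published in lock step.
    sync = message_filters.ApproximateTimeSynchronizer(subscribers, queue_size=5, slop=0.1)
    sync.registerCallback(msg_filter_callback)
    rospy.spin()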

compute_grasp_service(req)[source]

This service is used for computing a valid grasp from the sensor data. This is done by calling the main grasp computation service of the grasp_planner_server.py module with the sensor data as its input.

Parameters:req (panda_autograsp.msg.ComputeGrasp) – Empty service request.
Returns:Returns a bool specifying whether the grasp was computed successfully.
Return type:bool
plan_grasp_service(req)[source]

This service can be used to plan for the grasp pose computed by compute_grasp_service().

Parameters:req (panda_autograsp.msg.PlanGrasp) – Empty service request.
Returns:Returns a bool specifying whether the grasp planning was successful.
Return type:bool
plan_place_service(req)[source]

This service computes the movement plan for placing the object at the desired goal position. The place goal position can be changed by modifying the cfg/main_config.yaml file.

Parameters:req (panda_autograsp.msg.PlanPlace) – Empty service request.
Returns:Returns a bool specifying whether the place planning was successful.
Return type:bool
visualize_grasp_service(req)[source]

This service can be used to visualize the planned grasp.

Parameters:req (panda_autograsp.msg.VizualizeGrasp) – Empty service request.
Returns:Returns a bool specifying whether the visualization was successful.
Return type:bool
execute_grasp_service(req)[source]

This service is used to execute the computed grasp.

Parameters:req (panda_autograsp.msg.ExecuteGrasp) – Empty service request.
Returns:Returns a bool specifying whether the grasp was executed successfully.
Return type:bool
calibrate_sensor_service(req)[source]

This service can be used to perform the sensor/world calibration. To do this, place a chessboard or ArUco board in the bottom-left corner of the robot table. The calibration pattern that is used has to be defined in the cfg/main_config.yaml file. After the camera/world transformation matrix has been computed successfully, it is sent to the tf2_broadcaster module and the sensor frame position is updated.

Parameters:req (panda_autograsp.msg.ExecuteGrasp) – Empty service request.
Returns:Returns a bool specifying whether the calibration was successful.
Return type:bool
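The services documented above are all trigger-style services that return a success bool, so a client such as the panda_autograsp_cli can chain them as sketched below. The service names, service types and the success response field are assumptions for illustration; check the running node (for example with rosservice list) for the real names.

    # Hypothetical CLI-side call sequence; names and types are assumptions.
    import rospy
    from panda_autograsp.srv import (  # assumed service definitions
        CalibrateSensor, ComputeGrasp, PlanGrasp, VisualizeGrasp, ExecuteGrasp)

    rospy.init_node("panda_autograsp_cli_example")

    def call(name, srv_type):
        """Wait for a trigger-style service and return its (assumed) success field."""
        rospy.wait_for_service(name)
        proxy = rospy.ServiceProxy(name, srv_type)
        return proxy().success

    # Typical flow: calibrate the sensor, compute a grasp, plan it, visualize the
    # plan and finally execute it. Every step is skipped if the previous one failed.
    if call("calibrate_sensor", CalibrateSensor) and call("compute_grasp", ComputeGrasp):
        if call("plan_grasp", PlanGrasp) and call("visualize_grasp", VisualizeGrasp):
            call("execute_grasp", ExecuteGrasp)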
aruco_board_pose_estimation()[source]

Function that performs the camera/world calibration by using an ArUco board as a reference.

Returns:
  • retval (bool) – Calibration success bool.
  • rvec (list) – Rotation vector.
  • tvec (list) – Translation vector.
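Roughly the technique this method wraps can be sketched with OpenCV's aruco module as below; the board layout, marker dictionary and camera intrinsics are placeholders, and the legacy cv2.aruco API from opencv-contrib (before 4.7) is assumed.

    # Minimal ArUco board pose estimation sketch (placeholder board parameters).
    import cv2
    import numpy as np

    def estimate_board_pose(image, camera_matrix, dist_coeffs):
        dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
        # 5x7 board with 3 cm markers and 0.5 cm spacing (placeholder values).
        board = cv2.aruco.GridBoard_create(5, 7, 0.03, 0.005, dictionary)

        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
        if ids is None:
            return False, None, None

        # retval is the number of markers used for the estimate; > 0 means success.
        retval, rvec, tvec = cv2.aruco.estimatePoseBoard(
            corners, ids, board, camera_matrix, dist_coeffs, None, None)
        return retval > 0, rvec, tvec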
chessboard_pose_estimation()[source]

Function that performs the camera/world calibration by using a chessboard as a reference.

Returns:
  • retval (bool) – Calibration success bool.
  • rvec (list) – Rotation vector.
  • tvec (list) – Translation vector.
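Similarly, a chessboard-based estimate can be sketched with cv2.findChessboardCorners and cv2.solvePnP; the pattern size and square size below are placeholders, not the values used by the package.

    # Minimal chessboard pose estimation sketch (placeholder pattern parameters).
    import cv2
    import numpy as np

    def estimate_chessboard_pose(image, camera_matrix, dist_coeffs,
                                 pattern_size=(7, 6), square_size=0.025):
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if not found:
            return False, None, None

        # Refine the detected corner locations to sub-pixel accuracy.
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)

        # Object points of the inner corners expressed in the board (world) frame.
        objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
        objp *= square_size

        retval, rvec, tvec = cv2.solvePnP(objp, corners, camera_matrix, dist_coeffs)
        return retval, rvec, tvec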
broadcast_camera_frame(calib_type='aruco_board')[source]

Send the sensor pose acquired from the calibration to the tf2_broadcaster so that it can broadcast the sensor camera frame.

Parameters:calib_type (str, optional) – The calibration pattern you want to use, by default “aruco_board”.
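A sketch of how the calibration result (rvec, tvec) could be turned into a static tf2 transform is given below; the frame names are placeholders and, depending on the convention used by the calibration, the transform may first have to be inverted.

    # Sketch: broadcast a camera frame from an OpenCV (rvec, tvec) pair.
    import numpy as np
    import cv2
    import rospy
    import tf.transformations as tft
    import tf2_ros
    from geometry_msgs.msg import TransformStamped

    def broadcast_camera_frame(broadcaster, rvec, tvec,
                               parent="calib_frame", child="camera_link"):
        # Convert the Rodrigues rotation vector into a quaternion (x, y, z, w).
        rot_matrix, _ = cv2.Rodrigues(np.asarray(rvec, dtype=np.float64))
        homogeneous = np.eye(4)
        homogeneous[:3, :3] = rot_matrix
        qx, qy, qz, qw = tft.quaternion_from_matrix(homogeneous)
        tx, ty, tz = np.asarray(tvec, dtype=np.float64).ravel()

        transform = TransformStamped()
        transform.header.stamp = rospy.Time.now()
        transform.header.frame_id = parent
        transform.child_frame_id = child
        transform.transform.translation.x = tx
        transform.transform.translation.y = ty
        transform.transform.translation.z = tz
        transform.transform.rotation.x = qx
        transform.transform.rotation.y = qy
        transform.transform.rotation.z = qz
        transform.transform.rotation.w = qw

        # A static broadcaster suffices, since the camera pose only changes on recalibration.
        broadcaster.sendTransform(transform)

    rospy.init_node("camera_frame_broadcaster")
    static_broadcaster = tf2_ros.StaticTransformBroadcaster()
    # rvec and tvec would normally come from one of the pose estimation methods above.
    broadcast_camera_frame(static_broadcaster, [0.0, 0.0, 0.0], [0.5, 0.0, 0.5])
    rospy.spin()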