panda_autograsp.grasp_planners.gqcnn_grasp_planner

This module uses the BerkeleyAutomation GQCNN grasp detector to compute a valid grasp from RGB-D sensor data.
Classes

GQCNNGrasp()
    Class for storing the computed grasps.
GraspPlanner([model, sensor_type])
    Class used to compute the grasp pose from RGB-D data.
class panda_autograsp.grasp_planners.gqcnn_grasp_planner.GQCNNGrasp

    Class for storing the computed grasps. This class is a trimmed-down
    version of the original gqcnn.grasping.policy.policy.GraspAction.

    PARALLEL_JAW
        Specifies the parallel-jaw gripper type information.
class panda_autograsp.grasp_planners.gqcnn_grasp_planner.GraspPlanner(model='FC-GQCNN-4.0-PJ', sensor_type='kinectv2')

    Class used to compute the grasp pose from RGB-D data.

    cfg : autolab_core.YamlConfig
        The grasp yaml configuration file.

    policy_cfg : autolab_core.YamlConfig
        The policy yaml configuration file.

    sensor : perception.Kinect2Sensor
        The sensor object.

    grasping_policy : gqcnn.CrossEntropyRobustGraspingPolicy
        The grasp policy.
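    A minimal construction sketch, assuming the panda_autograsp package is
    installed and a Kinect v2 is connected; the keyword values shown are the
    documented defaults:

        from panda_autograsp.grasp_planners.gqcnn_grasp_planner import (
            GraspPlanner,
        )

        # Build a planner that uses the FC-GQCNN-4.0-PJ parallel-jaw model
        # and a Kinect v2 sensor backend (the documented defaults).
        planner = GraspPlanner(model="FC-GQCNN-4.0-PJ", sensor_type="kinectv2")

        # The attributes listed above are then available for inspection.
        print(type(planner.cfg))              # autolab_core.YamlConfig
        print(type(planner.grasping_policy))  # gqcnn.CrossEntropyRobustGraspingPolicy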
    read_images(skip_registration=False)
        Retrieves data frames from the sensor.

        Parameters:
            skip_registration (bool, optional) – If True, the registration
            step is skipped, by default False.

        Returns:
            tuple of perception.ColorImage, perception.DepthImage,
            perception.IrImage, numpy.ndarray – The ColorImage, DepthImage,
            and IrImage of the current frame.

        Raises:
            exceptions.RuntimeError – If the Kinect stream is not running.
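        A hedged usage sketch, assuming the planner's Kinect stream has
        already been started (the start-up call is not part of this
        reference):

            # Grab the current frames; a RuntimeError is raised when the
            # Kinect stream is not running, so guard the call.
            try:
                color_im, depth_im, ir_im, _ = planner.read_images(
                    skip_registration=False
                )
            except RuntimeError:
                print("Kinect stream is not running; start the sensor first.")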
    plan_grasp()
        Samples the image and computes possible grasps.

        Returns:
            GQCNNGrasp – The computed optimal grasp.
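        A minimal sketch of computing a grasp over the full camera frame:

            # Sample grasps over the entire image and keep the optimal one.
            grasp = planner.plan_grasp()  # returns a GQCNNGrasp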
    plan_grasp_bb(bounding_box=None)
        Samples the image and computes possible grasps while taking into
        account the supplied bounding box.

        Parameters:
            bounding_box (perception.RgbdDetection, optional) – A bounding
            box specifying the object, by default None.

        Returns:
            GQCNNGrasp – The computed optimal grasp.
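        A sketch that restricts grasp sampling to a region of interest; in
        practice the perception.RgbdDetection comes from an upstream object
        detector, which is not covered in this module:

            # A perception.RgbdDetection from an upstream object detector
            # would normally be passed here; None (the default) is shown
            # because constructing one by hand is outside this reference.
            grasp = planner.plan_grasp_bb(bounding_box=None)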
    plan_grasp_segmask(segmask)
        Samples the image and computes possible grasps while taking into
        account the supplied segmask.

        Parameters:
            segmask (perception.BinaryImage) – Binary segmask of the
            detected object.

        Returns:
            GQCNNGrasp – The computed optimal grasp.
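        A sketch using a pre-computed binary segmentation mask; loading it
        with perception.BinaryImage.open and the file name are assumptions
        made for illustration:

            from perception import BinaryImage

            # Load a binary segmask of the target object (hypothetical file)
            # and compute the optimal grasp restricted to the masked pixels.
            segmask = BinaryImage.open("object_segmask.png")
            grasp = planner.plan_grasp_segmask(segmask)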
    execute_policy(rgbd_image_state, grasping_policy, pose_frame)
        Executes a grasping policy on an RgbdImageState.

        Parameters:
            rgbd_image_state (gqcnn.RgbdImageState) – The gqcnn.RgbdImageState
            that encapsulates the depth and color image along with the camera
            intrinsics.
            grasping_policy (gqcnn.grasping.policy.policy.GraspingPolicy) –
            Grasping policy to use.
            pose_frame (str) – Frame of reference to publish the pose in.
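        A sketch of calling the lower-level execute_policy directly with the
        planner's own policy; the rgbd_state variable and the frame name are
        assumptions, since building a gqcnn.RgbdImageState from the camera
        frames and intrinsics is outside this reference:

            # `rgbd_state` is assumed to be a gqcnn.RgbdImageState built from
            # the color/depth frames and the camera intrinsics. The pose is
            # published in a hypothetical Kinect optical frame.
            planner.execute_policy(
                rgbd_image_state=rgbd_state,
                grasping_policy=planner.grasping_policy,
                pose_frame="kinect2_rgb_optical_frame",
            )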