# Grasp Pose Generation and Visualization

## Installation

```bash
pip install torch numba numpy "open3d>=0.16.0" tqdm
```

## Part 1: Grasp Pose Generation

Grasp poses are generated from mesh geometry and force-closure analysis. The annotation pipeline follows the process from the [AnyGrasp SDK](https://github.com/graspnet/anygrasp_sdk) and supports GPU acceleration.

The output includes both dense annotations and filtered sparse annotations; we use the **sparse annotation results** for grasping. The output is an `N x 17` array with the following structure:

```
[score, width, height, depth, view_angles (9), points (3), obj_ids (1)]
```

| Field | Dimensions | Description |
|-------|------------|-------------|
| `score` | 1 | Grasp quality score (0.1–1.0, lower is better) |
| `width` | 1 | Grasp width |
| `height` | 1 | Grasp height |
| `depth` | 1 | Grasp depth |
| `view_angles` | 9 | Flattened 3x3 rotation matrix |
| `points` | 3 | 3D position of the grasp center |
| `obj_ids` | 1 | Object identifier |

The definitions of `width`, `height`, and `depth` follow the conventions in the GraspNet and AnyGrasp papers. The `view_angles`, combined with `points` and `depth`, form the transformation `T_obj_tcp`. The gripper frame definition is consistent with the GraspNet API:
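
As a rough sketch of how the layout above can be consumed, the snippet below slices one row of the `N x 17` array into named fields and assembles a 4x4 homogeneous transform. The helper names (`parse_grasp`, `grasp_to_T_obj_tcp`) are illustrative, and the choice of offsetting the grasp center along the first rotation column by `depth` is an assumption about the GraspNet approach-axis convention, not something the annotation format itself guarantees:

```python
import numpy as np

def parse_grasp(row):
    """Split one length-17 annotation row into named fields.

    Layout: [score, width, height, depth, view_angles (9),
             points (3), obj_ids (1)].
    """
    assert row.shape == (17,)
    score, width, height, depth = row[0], row[1], row[2], row[3]
    rotation = row[4:13].reshape(3, 3)   # view_angles -> 3x3 rotation matrix
    center = row[13:16]                  # grasp center in the object frame
    obj_id = int(row[16])
    return score, width, height, depth, rotation, center, obj_id

def grasp_to_T_obj_tcp(row):
    """Assemble a 4x4 transform from rotation, center, and depth.

    Assumption: the TCP sits `depth` along the approach axis (taken here
    as the first rotation column); verify against the GraspNet API frame.
    """
    _, _, _, depth, R, t, _ = parse_grasp(row)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t + R[:, 0] * depth  # shift center along the approach axis
    return T
```

For example, a row with an identity rotation, center `(1, 2, 3)`, and depth `0.02` yields a transform whose translation is shifted by `0.02` along the x-axis.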