

Poster

Dexterous Grasp Transformer

Guo-Hao Xu · Yi-Lin Wei · Dian Zheng · Xiao-Ming Wu · Wei-Shi Zheng


Abstract: In this work, we propose a novel discriminative framework for dexterous grasp generation, named $\textbf{D}$exterous $\textbf{G}$rasp $\textbf{TR}$ansformer ($\textbf{DGTR}$), capable of predicting a diverse set of feasible grasp poses by processing the object point cloud with only $\textbf{one forward pass}$. We formulate dexterous grasp generation as a set prediction task and design a transformer-based grasping model for it. However, we identify that this set prediction paradigm encounters several optimization challenges in the field of dexterous grasping and leads to restricted performance. To address these issues, we propose progressive strategies for both the training and testing phases. First, the dynamic-static matching training (DSMT) strategy is presented to enhance optimization stability during the training phase. Second, we introduce adversarial-balanced test-time adaptation (AB-TTA) with a pair of adversarial losses to improve grasping quality during the testing phase. Experimental results on the DexGraspNet dataset demonstrate the capability of DGTR to predict dexterous grasp poses with both high quality and diversity. Notably, while maintaining high quality, the grasp poses predicted by DGTR significantly outperform previous works in diversity across multiple metrics, without any data pre-processing. Code is available at https://github.com/iSEE-Laboratory/DGTR.
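To make the set prediction formulation concrete, the sketch below shows the kind of matching step such a paradigm relies on: a fixed set of grasp poses is predicted in one forward pass, and each prediction is assigned to a ground-truth grasp by Hungarian matching before the regression loss is computed. This is a minimal illustration of the general set-prediction recipe the abstract names, not the authors' implementation; the pose parameterization, the plain L2 matching cost, and all tensor sizes are assumptions for the example.

```python
# Minimal sketch of set-prediction matching for grasp poses (illustrative only;
# DGTR's actual cost terms, losses, and pose parameterization are not shown here).
import torch
from scipy.optimize import linear_sum_assignment


def match_predictions_to_targets(pred_poses: torch.Tensor,
                                 gt_poses: torch.Tensor):
    """Hungarian matching between predicted and ground-truth grasp poses.

    pred_poses: (num_queries, pose_dim) grasps predicted in one forward pass.
    gt_poses:   (num_gt, pose_dim) ground-truth grasps for the same object.
    Returns index pairs (pred_idx, gt_idx) minimizing the total pairwise cost.
    """
    # Pairwise L2 cost between every prediction and every ground-truth grasp.
    cost = torch.cdist(pred_poses, gt_poses, p=2)  # (num_queries, num_gt)
    pred_idx, gt_idx = linear_sum_assignment(cost.detach().cpu().numpy())
    return torch.as_tensor(pred_idx), torch.as_tensor(gt_idx)


# Usage: compute a regression loss only on the matched prediction/target pairs.
num_queries, num_gt, pose_dim = 16, 4, 25   # hypothetical sizes
pred_poses = torch.randn(num_queries, pose_dim, requires_grad=True)
gt_poses = torch.randn(num_gt, pose_dim)

pred_idx, gt_idx = match_predictions_to_targets(pred_poses, gt_poses)
loss = torch.nn.functional.mse_loss(pred_poses[pred_idx], gt_poses[gt_idx])
loss.backward()
```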
