DexDiffuser: Generating Dexterous Grasps with Diffusion Models

Division of Robotics, Perception and Learning (RPL), KTH
(* equal contribution)
IEEE Robotics and Automation Letters (RA-L)

Our method can generate high-quality grasp poses for unknown objects based on partial 3D point clouds. We conducted our study both in simulation and in the real world.

Abstract

We introduce DexDiffuser, a novel dexterous grasping method that generates, evaluates, and refines grasps on partial object point clouds. DexDiffuser includes the conditional diffusion-based grasp sampler DexSampler and the dexterous grasp evaluator DexEvaluator. DexSampler generates high-quality grasps conditioned on object point clouds by iteratively denoising randomly sampled grasps. We also introduce two grasp refinement strategies: Evaluator-Guided Diffusion and Evaluator-based Sampling Refinement. The experimental results demonstrate that DexDiffuser consistently outperforms the state-of-the-art multi-finger grasp generation method FFHNet, with grasp success rates that are, on average, 9.12% and 19.44% higher in simulation and real robot experiments, respectively.
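To make the sample-evaluate-refine pipeline described above concrete, below is a minimal, self-contained Python/NumPy sketch of the idea: randomly sampled grasps are iteratively denoised conditioned on an object representation (the DexSampler idea), scored by an evaluator (standing in for DexEvaluator), and refined by perturbing grasps and keeping those the evaluator scores higher (an Evaluator-based Sampling Refinement-style loop). All function names, dimensions, and the toy "networks" here are illustrative placeholders, not the authors' actual architecture or API.

    import numpy as np

    # Placeholder stand-ins for DexSampler's denoiser and DexEvaluator.
    # In the actual method these are learned networks conditioned on an
    # encoded partial object point cloud; here they are toy functions.
    rng = np.random.default_rng(0)
    GRASP_DIM = 22          # illustrative grasp parameterization size
    NUM_STEPS = 100         # number of reverse-diffusion (denoising) steps

    def denoise_step(grasps, obj_feature, t):
        """One toy reverse-diffusion step: nudge noisy grasps toward the
        conditioning feature. The real sampler predicts noise with a
        conditional network instead."""
        pred_noise = 0.1 * (grasps - obj_feature)
        return grasps - (1.0 / NUM_STEPS) * pred_noise

    def evaluate_grasps(grasps, obj_feature):
        """Placeholder evaluator: returns a success score in (0, 1] per grasp."""
        return 1.0 / (1.0 + np.linalg.norm(grasps - obj_feature, axis=1))

    def sample_grasps(obj_feature, num_grasps=64):
        """Iteratively denoise randomly sampled grasps (the DexSampler idea)."""
        grasps = rng.standard_normal((num_grasps, GRASP_DIM))
        for t in reversed(range(NUM_STEPS)):
            grasps = denoise_step(grasps, obj_feature, t)
        return grasps

    def sampling_refinement(grasps, scores, obj_feature, sigma=0.05, rounds=5):
        """Evaluator-based refinement (sketch): perturb grasps and keep a
        perturbed grasp whenever the evaluator scores it higher."""
        for _ in range(rounds):
            candidates = grasps + sigma * rng.standard_normal(grasps.shape)
            cand_scores = evaluate_grasps(candidates, obj_feature)
            improved = cand_scores > scores
            grasps[improved] = candidates[improved]
            scores = np.maximum(scores, cand_scores)
        return grasps, scores

    # Toy "object feature" standing in for an encoded partial point cloud.
    obj_feature = rng.standard_normal(GRASP_DIM)
    grasps = sample_grasps(obj_feature)
    scores = evaluate_grasps(grasps, obj_feature)
    grasps, scores = sampling_refinement(grasps, scores, obj_feature)
    best_grasp = grasps[np.argmax(scores)]
    print("best predicted success score:", scores.max())

In the paper's setup, the denoiser and evaluator are trained networks and the refinement can also be injected into the diffusion process itself (Evaluator-Guided Diffusion); this sketch only illustrates the overall control flow under those stated assumptions.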

Real Robot Grasping

Grasps for the toy plane

BibTeX

@ARTICLE{10753039,
      author={Weng, Zehang and Lu, Haofei and Kragic, Danica and Lundell, Jens},
      journal={IEEE Robotics and Automation Letters}, 
      title={DexDiffuser: Generating Dexterous Grasps With Diffusion Models}, 
      year={2024},
      volume={9},
      number={12},
      pages={11834-11840},
      doi={10.1109/LRA.2024.3498776}}

Acknowledgements

This work was supported by the Swedish Research Council, the Knut and Alice Wallenberg Foundation, and the European Research Council (ERC-BIRD-884807).

The authors would also like to express their gratitude to Zheyu Zhuang for providing insightful feedback and to Ning Zhou for contributing an RTX 3090 graphics card.