6-DoF Grasp Pose Detection Based on Differentiable Physical Constraints

Abstract: Existing six-degree-of-freedom (6-DoF) grasp pose detection methods remain weak at explicitly modeling grasp stability and physical feasibility, making it difficult to ensure high grasp success rates for robotic execution in cluttered scenes. To address this limitation, we propose DPCGrasp, an end-to-end 6-DoF grasp pose detection method that incorporates differentiable physical constraints. The method introduces four physically motivated constraints: antipodal alignment, surface flatness, center-of-mass proximity, and contact tolerance. These constraints are formulated as differentiable regularization terms and embedded in the training objective to guide predicted grasp poses toward physically stable configurations. To enhance local geometric understanding around candidate grasp points, we design a multi-scale cylindrical sampling and feature fusion module. Furthermore, we develop a self-attention-based multi-parameter grasp prediction head that captures latent dependencies among grasp parameters, improving the consistency of parameter predictions while keeping the prediction tasks decoupled. Experimental results show that the proposed method improves average precision on the large-scale GraspNet-1Billion dataset by 4.66 percentage points over state-of-the-art methods and raises the average grasp success rate in real-robot experiments by 7.83 percentage points, confirming its effectiveness and practical potential for real-world grasping tasks.
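
To make the "differentiable regularization" idea concrete, the following is a minimal PyTorch-style sketch of how four physical-constraint penalties could be added to a base grasp loss. The term definitions, function names, inputs, tolerance value, and weights below are illustrative assumptions for exposition only; they are not the paper's actual formulations.

```python
# Minimal sketch (not the authors' code): adding differentiable physical-constraint
# regularizers to a base grasp loss, in the spirit described by the abstract.
# All four term definitions are illustrative placeholders.
import torch
import torch.nn.functional as F


def antipodal_term(contact_normals_l, contact_normals_r):
    """Penalize contact-normal pairs that deviate from an antipodal (opposing) configuration."""
    # Cosine similarity of -1 means perfectly opposing normals, so (1 + cos) is 0 at the optimum.
    cos = F.cosine_similarity(contact_normals_l, contact_normals_r, dim=-1)
    return (1.0 + cos).mean()


def flatness_term(local_normal_var):
    """Penalize high variance of surface normals around the contact regions (rough patches)."""
    return local_normal_var.mean()


def com_proximity_term(grasp_centers, object_coms):
    """Penalize grasp centers that lie far from the (estimated) object center of mass."""
    return torch.norm(grasp_centers - object_coms, dim=-1).mean()


def contact_tolerance_term(contact_margins, tau=0.01):
    """Hinge-style penalty on contact margins that fall below a tolerance tau (in meters)."""
    return F.relu(tau - contact_margins).mean()


def total_loss(base_grasp_loss, phys, weights=(1.0, 1.0, 1.0, 1.0)):
    """Base task loss plus a weighted sum of the four physical-constraint regularizers."""
    w1, w2, w3, w4 = weights
    reg = (w1 * antipodal_term(phys["n_l"], phys["n_r"])
           + w2 * flatness_term(phys["normal_var"])
           + w3 * com_proximity_term(phys["centers"], phys["coms"])
           + w4 * contact_tolerance_term(phys["margins"]))
    return base_grasp_loss + reg
```

Because every term is a smooth (or piecewise-smooth) function of network outputs, gradients flow from the regularizers back into the grasp pose predictions during training, which is what allows the constraints to shape the learned poses rather than only filtering them afterward.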

       
