Journal of Guangdong University of Technology ›› 2021, Vol. 38 ›› Issue (04): 65-70. doi: 10.12052/gdutxb.200138


Direct Sparse Visual Odometry Based on Enhanced Stereo-Camera Constraints

Ye Pei-chu, Li Dong, Zhang Yun

  1. School of Automation, Guangdong University of Technology, Guangzhou 510006, China
  • Received: 2020-10-16  Online: 2021-07-10  Published: 2021-05-25
  • Corresponding author: Li Dong (1983-), male, associate professor, Ph.D., master's supervisor; his research interests include pattern recognition, machine learning, face recognition, and machine vision. E-mail: dong.li@gdut.edu.cn
  • About the first author: Ye Pei-chu (1996-), male, master's student; his research interests include visual SLAM, feature matching, and face recognition
  • Funding:
    Natural Science Foundation of Guangdong Province (2021A1515011867)

Direct Sparse Visual Odometry Based on Enhanced Stereo-Camera Constraints

Ye Pei-chu, Li Dong, Zhang Yun   

  1. School of Automation, Guangdong University of Technology, Guangzhou 510006, China
  • Received:2020-10-16 Online:2021-07-10 Published:2021-05-25

Abstract: To improve the localization speed and accuracy of Stereo Direct Sparse Odometry (Stereo DSO), so that mobile robots can carry out their tasks more effectively, a direct sparse visual odometry system based on enhanced stereo constraints is proposed. Direct-method Simultaneous Localization and Mapping (SLAM) systems build a photometric-error objective directly on image pixels, without extracting feature points; this overcomes the fragility of feature-based SLAM in weakly textured scenes and makes front-end tracking more efficient. A fast and accurate stereo initialization method is proposed, which uses triangulation uncertainty to assign different depth ranges to different types of points and thereby accelerates the convergence of the depth filters. In addition, stereo constraints are introduced in the motion-estimation stage, making the system's localization in absolute scale more accurate. Experiments on the 11 sequences of the public KITTI dataset show that the proposed algorithm clearly outperforms the direct-method Stereo Large-Scale Direct SLAM (LSD-SLAM2) and Stereo DSO in localization accuracy, and reaches a level comparable to the feature-based ORB-SLAM3, providing a better localization scheme for direct-method SLAM.
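The photometric error that the direct method minimizes can be illustrated with a toy example. The sketch below is a minimal NumPy illustration, not the paper's implementation: the real system optimizes residuals over pixel patches, exposure parameters, and camera pose, whereas here the warp is reduced to a fixed pixel correspondence.

```python
import numpy as np

def photometric_error(ref_img, cur_img, ref_pts, cur_pts):
    """Sum of squared intensity differences between corresponding
    pixel locations in a reference and a current image (direct method).
    Points are given as (x, y) integer pixel coordinates."""
    r = ref_img[ref_pts[:, 1], ref_pts[:, 0]].astype(np.float64)
    c = cur_img[cur_pts[:, 1], cur_pts[:, 0]].astype(np.float64)
    return float(np.sum((r - c) ** 2))

# toy images: a bright patch shifted right by one pixel in the current image
ref = np.zeros((8, 8)); ref[2:5, 2:5] = 100.0
cur = np.zeros((8, 8)); cur[2:5, 3:6] = 100.0
pts = np.array([[2, 3]])                                   # one high-gradient pixel
print(photometric_error(ref, cur, pts, pts + [1, 0]))      # correct warp -> 0.0
print(photometric_error(ref, cur, pts, pts))               # identity warp -> 10000.0
```

A pose estimate that warps each reference pixel onto its true location drives this error to zero, which is why only pixels with sufficient intensity gradient are informative: in flat regions every candidate warp yields the same error.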

Keywords: visual odometry, direct method, mobile robot, SLAM

Abstract: A new direct SLAM (Simultaneous Localization and Mapping) system with enhanced stereo-camera constraints, based on Stereo Direct Sparse Odometry (Stereo DSO), is presented. As a direct method, it can use any image pixel with sufficient intensity gradient, which keeps it robust even in featureless areas. A two-stage check combining SAD (Sum of Absolute Differences) with NCC (Normalized Cross-Correlation) matching is used for stereo matching, and triangulation uncertainty is taken into account to accelerate the convergence of the depth filters. To estimate the accurate scale of the environment, static stereo constraints are added to the tracking module. An evaluation on KITTI demonstrates that the proposed system performs better than state-of-the-art direct SLAM systems such as LSD-SLAM2 and Stereo DSO, and performs comparably to ORB-SLAM3, the state-of-the-art feature-based SLAM system. The proposal provides mobile robots with a new direct SLAM system to explore the environment more precisely and robustly.
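The two-stage SAD/NCC check described above can be sketched for a rectified stereo pair. This is a minimal illustration under assumed conditions (integer disparities, a single square patch, zero-padded borders avoided by construction); the function and image names are hypothetical, not taken from the paper's code.

```python
import numpy as np

def sad(a, b):
    """Sum of Absolute Differences between two equal-sized patches."""
    return float(np.abs(a - b).sum())

def ncc(a, b):
    """Normalized Cross-Correlation between two equal-sized patches."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def match_disparity(left, right, x, y, w, max_d, top_k=3):
    """Two-stage check: cheap SAD pre-selects the top_k candidate
    disparities along the epipolar line, then NCC re-scores them."""
    patch = left[y - w:y + w + 1, x - w:x + w + 1]
    scores = []
    for d in range(min(max_d, x - w) + 1):
        cand = right[y - w:y + w + 1, x - d - w:x - d + w + 1]
        scores.append((sad(patch, cand), d))
    scores.sort()                       # stage 1: rank by SAD
    return max(scores[:top_k],          # stage 2: re-rank survivors by NCC
               key=lambda s: ncc(patch, right[y - w:y + w + 1,
                                              x - s[1] - w:x - s[1] + w + 1]))[1]

# toy rectified pair: a textured patch shifted left by 2 px in the right image
tex = np.array([[10., 200., 30.], [90., 40., 150.], [20., 160., 80.]])
left = np.zeros((9, 16)); left[3:6, 8:11] = tex
right = np.zeros((9, 16)); right[3:6, 6:9] = tex
print(match_disparity(left, right, x=9, y=4, w=1, max_d=5))  # -> 2
```

The design rationale matches the abstract's two-stage idea: SAD is cheap enough to scan the whole disparity range, while the costlier but brightness-invariant NCC only re-ranks a handful of survivors.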

Key words: visual odometry, direct method, mobile robot, simultaneous localization and mapping (SLAM)

CLC number: TP391.4
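The triangulation uncertainty that seeds the depth filters can be illustrated for the rectified stereo case. This is a minimal sketch assuming a pin-hole model and first-order error propagation; the focal length and baseline are illustrative values close to KITTI's calibration, not parameters taken from the paper.

```python
def stereo_depth(fx, baseline, disparity, sigma_d=1.0):
    """Depth from a rectified stereo pair, z = fx * b / d, with its
    first-order (linearized) uncertainty sigma_z = (z / d) * sigma_d.
    Small disparities (far points) get a much wider depth range."""
    z = fx * baseline / disparity
    return z, (z / disparity) * sigma_d

near = stereo_depth(fx=718.9, baseline=0.54, disparity=50.0)
far = stereo_depth(fx=718.9, baseline=0.54, disparity=5.0)
print(near)  # approx. (7.76 m, 0.16 m)
print(far)   # approx. (77.6 m, 15.5 m)
```

Because sigma_z grows quadratically as disparity shrinks, assigning depth ranges per point from this uncertainty (rather than one fixed range for all points) lets near points converge in a tight interval while far points search a wide one, which is the abstract's rationale for faster depth-filter convergence.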
[1] CADENA C, CARLONE L, CARRILLO H, et al. Past, present, and future of simultaneous localization and mapping: toward the robust-perception age [J]. IEEE Transactions on Robotics, 2016, 32(6): 1309-1332.
[2] FUENTES-PACHECO J, RUIZ-ASCENCIO J, RENDON-MANCHA J M. Visual simultaneous localization and mapping: a survey [J]. Artificial Intelligence Review, 2015, 43(1): 55-81.
[3] ZHU F L, ZENG B, CAO J. Parallel optimization and implementation of SLAM algorithm based on particle filter [J]. Journal of Guangdong University of Technology, 2017, 34(2): 92-96.
[4] WEI W, TAN L, JIN G, et al. A survey of UAV visual navigation based on monocular SLAM[C]//2018 IEEE 4th Information Technology and Mechatronics Engineering Conference (ITOEC). Chongqing: IEEE, 2018: 1849-1853.
[5] ROS G, SAPPA A, PONSA D, et al. Visual SLAM for driverless cars: a brief survey[C]//Intelligent Vehicles Symposium (IV) Workshops. Alcala de Henares: IEEE, 2012, 2.
[6] LIANG M J, MIN H Q, LUO R. Graph-based SLAM: a survey [J]. Robot, 2013, 35(4): 500-512.
[7] KLEIN G, MURRAY D. Parallel tracking and mapping for small AR workspaces[C]//2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality. Nara: IEEE, 2007: 225-234.
[8] MUR-ARTAL R, MONTIEL J M M, TARDOS J D. ORB-SLAM: a versatile and accurate monocular SLAM system [J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163.
[9] MUR-ARTAL R, TARDOS J D. ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras [J]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262.
[10] CAMPOS C, ELVIRA R, RODRIGUEZ J J G, et al. ORB-SLAM3: an accurate open-source library for visual, visual-inertial and multi-map SLAM[J]. arXiv:2007.11898, 2020.
[11] KERL C, STURM J, CREMERS D. Dense visual SLAM for RGB-D cameras[C]//2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. Tokyo: IEEE, 2013: 2100-2106.
[12] ENGEL J, SCHOPS T, CREMERS D. LSD-SLAM: large-scale direct monocular SLAM[C]//European Conference on Computer Vision. Zurich: Springer, 2014: 834-849.
[13] ENGEL J, KOLTUN V, CREMERS D. Direct sparse odometry [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(3): 611-625.
[14] WANG R, SCHWORER M, CREMERS D. Stereo DSO: large-scale direct sparse visual odometry with stereo cameras[C]//Proceedings of the IEEE International Conference on Computer Vision. Venice: IEEE, 2017: 3903-3911.
[15] HEO Y S, LEE K M, LEE S U. Robust stereo matching using adaptive normalized cross-correlation [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 33(4): 807-822.
[16] CVISIC I, CESIC J, MARKOVIC I, et al. SOFT-SLAM: computationally efficient stereo visual simultaneous localization and mapping for autonomous unmanned aerial vehicles [J]. Journal of Field Robotics, 2018, 35(4): 578-595.
[17] VANNE J, AHO E, HAMALAINEN T D, et al. A high-performance sum of absolute difference implementation for motion estimation [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2006, 16(7): 876-883.
[18] FORSTER C, PIZZOLI M, SCARAMUZZA D. SVO: fast semi-direct monocular visual odometry[C]//2014 IEEE International Conference on Robotics and Automation (ICRA). Hong Kong: IEEE, 2014: 15-22.
[19] ZHANG Z, DONG P, WANG J, et al. Improving S-MSCKF with variational bayesian adaptive nonlinear filter [J]. IEEE Sensors Journal, 2020, 20(16): 9437-9448.
[20] GEIGER A, LENZ P, STILLER C, et al. Vision meets robotics: the KITTI dataset [J]. The International Journal of Robotics Research, 2013, 32(11): 1231-1237.
[21] ENGEL J, STUCKLER J, CREMERS D. Large-scale direct SLAM with stereo cameras[C]//2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Hamburg: IEEE, 2015: 1935-1942.