Journal of Guangdong University of Technology ›› 2017, Vol. 34 ›› Issue (05): 40-44. DOI: 10.12052/gdutxb.160122

• Comprehensive Research •

A Research on Monocular Visual Odometry for Mobile Robots

Chi Peng-ke, Su Cheng-yue

  1. School of Physics and Optoelectronic Engineering, Guangdong University of Technology, Guangzhou 510006, China
  • Received: 2016-10-10  Online: 2017-09-09  Published: 2017-07-10
  • Corresponding author: Su Cheng-yue (b. 1961), male, professor, Ph.D.; research interests: applied physics and machine vision. E-mail: scy.gdut@163.com
  • About the author: Chi Peng-ke (b. 1991), male, master's student; research interests: robot control technology and machine vision.
  • Funding: Modern Information Service Industry Project of the Guangdong Province Information Industry Development Special Fund (2150510)


Abstract: A monocular visual odometry for mobile robots that combines the feature-based and direct methods is proposed. The Shi-Tomasi detector is used to ensure the stability of the feature points extracted by the FAST algorithm, a quadtree is used to distribute the feature points uniformly over the image, and the pyramidal KLT algorithm tracks these points into the next frame. The relative camera pose is then recovered from the homography matrix computed with the normalized direct linear transformation (DLT) algorithm. Finally, the photometric error between feature-point image patches is minimized by direct matching to estimate the motion of the mobile robot, and the graph-optimization framework g2o refines the locally consecutive poses, which improves the robustness of the motion estimation. The proposed visual odometry is evaluated on a TurtleBot platform in an indoor environment and on the KITTI dataset; the experimental results verify the effectiveness and practicality of the proposed approach.

Key words: direct method, monocular visual odometry, quadtree, homography matrix, photometric error
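As a concrete illustration of the normalized DLT step in the pipeline above, the following NumPy sketch (an editorial reconstruction, not the authors' code; the function names are hypothetical) estimates a homography from point correspondences:

```python
import numpy as np

def normalize_points(pts):
    # Similarity transform: zero centroid, mean distance sqrt(2) (Hartley normalization).
    centroid = pts.mean(axis=0)
    scale = np.sqrt(2) / np.linalg.norm(pts - centroid, axis=1).mean()
    T = np.array([[scale, 0.0, -scale * centroid[0]],
                  [0.0, scale, -scale * centroid[1]],
                  [0.0, 0.0, 1.0]])
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    return (T @ pts_h.T).T, T

def dlt_homography(src, dst):
    # Estimate H (up to scale) with dst ~ H @ src from >= 4 correspondences.
    src_n, T_src = normalize_points(src)
    dst_n, T_dst = normalize_points(dst)
    rows = []
    for (x, y, _), (u, v, _) in zip(src_n, dst_n):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    H_n = Vt[-1].reshape(3, 3)               # right singular vector of smallest singular value
    H = np.linalg.inv(T_dst) @ H_n @ T_src   # undo the normalization
    return H / H[2, 2]
```

Because a homography is defined only up to scale, the result is fixed here by setting H[2, 2] = 1; the point normalization keeps the linear system well conditioned.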

CLC number: TP242
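The direct-matching step, minimizing the photometric error over feature-point image patches, can likewise be sketched with a small Gauss-Newton solver. This is an illustrative assumption rather than the paper's implementation: it estimates only the 2-D translation of a single patch with bilinear sampling, whereas the paper minimizes the same kind of residual over the robot's full pose:

```python
import numpy as np

def bilinear(img, u, v):
    # Sample img at sub-pixel locations (u, v) with bilinear interpolation.
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    du, dv = u - u0, v - v0
    return (img[v0, u0] * (1 - du) * (1 - dv) + img[v0, u0 + 1] * du * (1 - dv)
            + img[v0 + 1, u0] * (1 - du) * dv + img[v0 + 1, u0 + 1] * du * dv)

def align_patch(ref, cur, x0, y0, half=7, iters=30):
    # Gauss-Newton minimization of the photometric error of one patch,
    # solving for the 2-D translation p = (dx, dy) of the patch in `cur`.
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    patch = ref[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    gy, gx = np.gradient(cur)                       # image gradients of the current frame
    p = np.zeros(2)
    for _ in range(iters):
        u, v = x0 + xs + p[0], y0 + ys + p[1]
        r = bilinear(cur, u, v) - patch             # photometric residual
        J = np.stack([bilinear(gx, u, v).ravel(),   # Jacobian: d(residual)/d(dx, dy)
                      bilinear(gy, u, v).ravel()], axis=1)
        step, *_ = np.linalg.lstsq(J, -r.ravel(), rcond=None)
        p += step
        if np.linalg.norm(step) < 1e-4:
            break
    return p
```

The same residual and Jacobian structure carries over to pose estimation: the translation parameters are replaced by the pose increment, and the per-patch residuals from all tracked feature points are stacked into one least-squares problem.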