Journal of Guangdong University of Technology ›› 2017, Vol. 34 ›› Issue (05): 40-44. doi: 10.12052/gdutxb.160122


A Research on Monocular Visual Odometry for Mobile Robots

Chi Peng-ke, Su Cheng-yue   

  1. School of Physics and Optoelectronic Engineering, Guangdong University of Technology, Guangzhou 510006, China
  Received: 2016-10-10    Online: 2017-09-09    Published: 2017-07-10

Abstract: A monocular visual odometry for mobile robots that combines the feature-based method and the direct method is proposed. The Shi-Tomasi detector is used to improve the stability of the feature points extracted by the FAST algorithm, and a quadtree is used to ensure that the feature points are uniformly distributed over the image, which benefits tracking them into the next frame with the pyramid KLT algorithm. The relative pose of the camera is then recovered from the homography matrix computed by the normalized direct linear transformation (DLT) algorithm. Finally, the photometric error between image patches is minimized to estimate the motion of the mobile robot, and the graph optimization framework g2o is used to refine the local, continuous pose estimates, which improves the robustness of motion estimation. The proposed monocular visual odometry is tested on the TurtleBot platform in an indoor room and on the KITTI dataset. The experimental results verify the effectiveness and practicality of the proposed approach.
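The normalized DLT step mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration of the standard normalized DLT for homography estimation, not code from the paper; the function names are illustrative.

```python
import numpy as np

def normalize_points(pts):
    """Translate the centroid to the origin and scale so the mean
    distance from the origin is sqrt(2) (the standard normalization)."""
    centroid = pts.mean(axis=0)
    d = np.linalg.norm(pts - centroid, axis=1).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * centroid[0]],
                  [0, s, -s * centroid[1]],
                  [0, 0, 1]])
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ pts_h.T).T, T

def dlt_homography(pts1, pts2):
    """Estimate H such that pts2 ~ H @ pts1 from >= 4 correspondences
    via the normalized direct linear transformation."""
    p1, T1 = normalize_points(np.asarray(pts1, float))
    p2, T2 = normalize_points(np.asarray(pts2, float))
    A = []
    for (x, y, _), (u, v, _) in zip(p1, p2):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector of A with the
    # smallest singular value, reshaped into a 3x3 matrix.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    Hn = Vt[-1].reshape(3, 3)
    # Undo the normalization and fix the scale.
    H = np.linalg.inv(T2) @ Hn @ T1
    return H / H[2, 2]
```

With noise-free correspondences the true homography is recovered up to numerical precision; in practice a robust variant (e.g. OpenCV's `cv2.findHomography` with RANSAC) is used on tracked feature pairs.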

Key words: direct method, monocular visual odometry, quadtree, homography matrix, photometric error
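The quadtree step listed among the keywords, which spreads feature points evenly across the image, can be sketched as follows. This is an illustrative simplification, not the paper's implementation: the most crowded cell is split into four until enough occupied cells exist, and only the best-scoring point in each cell is kept.

```python
import numpy as np

def quadtree_select(points, scores, bounds, target):
    """Keep at most one best-scoring point per quadtree cell.

    points: (N, 2) array of (x, y); scores: (N,) detector responses;
    bounds: (x0, y0, x1, y1) image region; target: desired cell count.
    Returns sorted indices of the retained points.
    """
    cells = [(bounds, np.arange(len(points)))]
    while len(cells) < target:
        # Split the cell that currently holds the most points.
        cells.sort(key=lambda c: -len(c[1]))
        (x0, y0, x1, y1), idx = cells.pop(0)
        if len(idx) <= 1:
            cells.append(((x0, y0, x1, y1), idx))
            break  # every cell holds a single point; nothing to split
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        quads = [(x0, y0, mx, my), (mx, y0, x1, my),
                 (x0, my, mx, y1), (mx, my, x1, y1)]
        for qx0, qy0, qx1, qy1 in quads:
            m = (points[idx, 0] >= qx0) & (points[idx, 0] < qx1) & \
                (points[idx, 1] >= qy0) & (points[idx, 1] < qy1)
            if m.any():
                cells.append(((qx0, qy0, qx1, qy1), idx[m]))
    # Retain the highest-scoring point in each occupied cell.
    keep = [idx[np.argmax(scores[idx])] for _, idx in cells if len(idx)]
    return sorted(keep)
```

Suppressing all but the strongest response per cell prevents features from clustering on a few textured regions, which in turn gives the KLT tracker and the homography estimate better-conditioned, image-wide correspondences.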

CLC Number: TP242