Journal of Beijing Institute of Technology
Volume 28, Issue 1
Xinliang Zhong, Xiao Luo, Jiaheng Zhao, Yutong Huang. Semi-Direct Visual Odometry and Mapping System with RGB-D Camera[J]. JOURNAL OF BEIJING INSTITUTE OF TECHNOLOGY, 2019, 28(1): 83-93. doi: 10.15918/j.jbit1004-0579.17149

Semi-Direct Visual Odometry and Mapping System with RGB-D Camera

doi: 10.15918/j.jbit1004-0579.17149
  • Received Date: 2017-10-22
  • In this paper, a semi-direct visual odometry and mapping system with an RGB-D camera is proposed, combining the merits of both feature-based and direct methods. The presented system directly estimates the camera motion between two consecutive RGB-D frames by minimizing the photometric error. To handle outliers and noise, a robust sensor model built upon the t-distribution and an error function mixing depth and photometric errors are used to enhance accuracy and robustness. Local graph optimization based on key frames reduces the accumulated error and refines the local map. The loop-closure detection method, which combines an appearance-similarity method with spatial-location constraints, increases the detection speed. Experimental results demonstrate that the proposed approach achieves higher accuracy in motion estimation and environment reconstruction than other state-of-the-art methods. Moreover, the proposed approach runs in real time on a laptop without a GPU, which makes it attractive for robots with limited computational resources.
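The robust sensor model mentioned in the abstract downweights residuals under a Student-t error distribution rather than a Gaussian, so large photometric outliers contribute little to the motion estimate. The following is a minimal sketch of such a weighting scheme, not the authors' implementation; the function name, the fixed degrees of freedom, and the fixed-point scale iteration are illustrative assumptions:

```python
import numpy as np

def t_distribution_weights(residuals, dof=5.0, iters=10):
    """Per-residual weights under a Student-t noise model (illustrative).

    Returns w_i = (dof + 1) / (dof + (r_i / sigma)^2), where the scale
    sigma^2 is estimated jointly with the weights by fixed-point iteration.
    Inliers (small r_i) get weights near 1; outliers are suppressed.
    """
    r2 = np.asarray(residuals, dtype=np.float64) ** 2
    sigma2 = np.mean(r2) + 1e-12          # crude initial scale estimate
    for _ in range(iters):
        w = (dof + 1.0) / (dof + r2 / sigma2)
        sigma2 = np.sum(w * r2) / len(r2) + 1e-12  # re-estimate scale
    return (dof + 1.0) / (dof + r2 / sigma2)
```

In an iteratively re-weighted least-squares motion estimator, these weights would multiply each pixel's photometric (or mixed photometric/depth) residual before solving the normal equations, which is how a t-distribution model typically enters a direct RGB-D odometry pipeline.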
