A High-accuracy and Semi-dense Feature-based VSLAM System
SLAM (Simultaneous Localization and Mapping) is a foundational technology for mobile robots and one of the key prerequisites for robot intelligence. VSLAM (Visual Simultaneous Localization and Mapping) uses visual sensors to build an incremental map of the environment while simultaneously localizing the moving robot within it. Current feature-based VSLAM systems tend to build sparse maps, which are not accurate enough for applications such as navigation and obstacle avoidance. This paper therefore proposes a high-accuracy, semi-dense VSLAM system based on a monocular camera. The proposed system adds epipolar line search and block matching modules that help a feature-based VSLAM system estimate depth more accurately. Experimental results show that the proposed semi-dense VSLAM system reconstructs denser point cloud maps, and that its tracking trajectory accuracy is 9.13% higher than that of the ORB-SLAM2 system.
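The depth-estimation step described above can be illustrated with a short sketch. The code below is a minimal, illustrative implementation of epipolar line search combined with block matching (scored here with normalized cross-correlation), not the paper's actual module: it samples candidate depths for a reference pixel, projects each hypothesis into the current frame using the known relative pose, and keeps the depth whose surrounding patch best matches the reference patch. All names (epipolar_search, ncc) and parameters (depth range, window size, number of samples) are assumptions chosen for illustration.

    import numpy as np

    def ncc(patch_a, patch_b):
        """Zero-mean normalized cross-correlation between two equal-size patches."""
        a = patch_a - patch_a.mean()
        b = patch_b - patch_b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-10
        return float((a * b).sum() / denom)

    def epipolar_search(ref_img, cur_img, px_ref, K, R_cr, t_cr,
                        d_min=0.5, d_max=10.0, n_samples=100, half_win=3):
        """Search along the epipolar line in cur_img for the best block match.

        ref_img, cur_img : grayscale images as 2D numpy arrays
        px_ref           : (u, v) integer pixel in the reference image,
                           assumed to lie at least half_win from the border
        K                : 3x3 camera intrinsic matrix
        R_cr, t_cr       : rotation and translation mapping reference-frame
                           coordinates into the current frame
        Returns the best depth hypothesis and its NCC score.
        """
        K_inv = np.linalg.inv(K)
        u, v = px_ref
        ray_ref = K_inv @ np.array([u, v, 1.0])   # bearing with unit z-depth
        patch_ref = ref_img[v - half_win:v + half_win + 1,
                            u - half_win:u + half_win + 1].astype(np.float64)

        best_depth, best_score = None, -1.0
        for d in np.linspace(d_min, d_max, n_samples):
            p_cur = R_cr @ (ray_ref * d) + t_cr   # 3D hypothesis in current frame
            if p_cur[2] <= 0:
                continue                          # behind the camera
            uv = K @ (p_cur / p_cur[2])           # project onto the epipolar line
            uc, vc = int(round(uv[0])), int(round(uv[1]))
            h, w = cur_img.shape
            if not (half_win <= uc < w - half_win and half_win <= vc < h - half_win):
                continue                          # projection outside the image
            patch_cur = cur_img[vc - half_win:vc + half_win + 1,
                                uc - half_win:uc + half_win + 1].astype(np.float64)
            score = ncc(patch_ref, patch_cur)
            if score > best_score:
                best_score, best_depth = score, d
        return best_depth, best_score

In a full semi-dense pipeline, a match accepted here would typically be fed into a per-pixel depth filter and fused over multiple frames; the sketch only shows the single-pair search step.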
J. K. Makhubela, T. Zuva and O. Y. Agunbiade, “A Review on Vision Simultaneous Localization and Mapping (VSLAM)”, 2018 International Conference on Intelligent and Innovative Computing Applications (ICONIC), pp.1-5, 2018.
B. D. Gouveia, D. Portugal and L. Marques, “Speeding up Rao-Blackwellized particle filter SLAM with a multithreaded architecture”, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp.1583-1588, 2014.
H. Strasdat, J. Montiel and A. J. Davison, “Scale drift-aware large scale monocular SLAM”, Proceedings of Robotics: Science and Systems, pp.27-30, 2010.
B. Bescos, J. M. Fácil and J. Civera, “DynaSLAM: Tracking, Mapping and Inpainting in Dynamic Scenes”, IEEE Robotics and Automation Letters, Vol.3, 2018.
J. Tang, L. Ericson and J. Folkesson, “Efficient Correspondence Prediction for Real-Time SLAM”, Computer Vision and Pattern Recognition, 2019.
R. Mur-Artal and J. D. Tardós, “ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras”, IEEE Transactions on Robotics, Vol.33, no.5, pp.1255-1262, 2017.
A. Geiger, J. Ziegler and C. Stiller, “StereoScan: Dense 3d reconstruction in real-time”, IEEE Intelligent Vehicles Symposium (IV), pp.963-968, 2011.
J. Engel, J. Sturm and D. Cremers, “Semi-dense visual odometry for a monocular camera”, IEEE International Conference on Computer Vision (ICCV), 2013.
E. Rublee, V. Rabaud and K. Konolige, “ORB: an efficient alternative to SIFT or SURF”, IEEE International Conference on Computer Vision (ICCV), pp.2564-2571, 2011.
A. Hornung, K. M. Wurm, M. Bennewitz, “OctoMap: An efficient probabilistic 3D mapping framework based on octrees”, Autonomous Robots, Vol.34, no.3, pp.189-206, 2013.