1. Unmanned System Research Institute, Northwestern Polytechnical University, Xi'an 710072, China
2. National Innovation Institute of Defense Technology, Academy of Military Sciences, Beijing 100850, China
Email: xuyuelei@nwpu.edu.cn
GONG Kun, XU Xin, CHEN Xiaoqing, et al. Binocular vision SLAM with fused point and line features in weak texture environment[J]. Optics and Precision Engineering. DOI: 10.37188/OPE.XXXXXXXX.0001
To address the trajectory drift of point-feature-based visual Simultaneous Localization and Mapping (SLAM) in indoor environments with weak texture, this paper proposes a binocular visual SLAM system that fuses point and line features, focusing on the extraction and matching of line features. To improve the quality of line features, the Line Segment Detector (LSD) is extended with length and gradient suppression and short-line merging. The matching problem is then cast as an optimization problem whose cost function is built from geometric constraints, yielding a fast, geometry-constrained line segment triangulation method that uses an L1-norm sparse solution to perform line matching and triangulation efficiently. Experimental results show that the proposed method outperforms traditional descriptor-based methods on multiple datasets; in indoor weak-texture scenes in particular, it reaches an average matching accuracy of 91.67% with an average matching time of only 7.4 ms. Built on this method, the proposed binocular visual SLAM system attains a positioning error of 1.24 m on a weak-texture dataset, versus 7.49 m for ORB-SLAM2 and 3.67 m for PL-SLAM, outperforming both existing algorithms.
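The line-feature refinement described above (suppressing short segments and merging broken fragments before matching) can be sketched in plain Python. This is an illustrative sketch only, not the paper's implementation: the function and parameter names (`refine`, `min_len`, `angle_tol`, `gap_tol`) and the thresholds are assumptions, and the gradient-suppression step is omitted because it needs access to image gradients, so only the geometric length-suppression and short-line-merging steps are shown.

```python
import math

# A line segment is a pair of endpoints: ((x1, y1), (x2, y2)).

def seg_length(s):
    (x1, y1), (x2, y2) = s
    return math.hypot(x2 - x1, y2 - y1)

def seg_angle(s):
    # Undirected orientation, folded into [0, pi).
    (x1, y1), (x2, y2) = s
    return math.atan2(y2 - y1, x2 - x1) % math.pi

def try_merge(a, b, angle_tol, gap_tol):
    """Return the merged segment if a and b are nearly collinear
    and their closest endpoints are within gap_tol, else None."""
    da = abs(seg_angle(a) - seg_angle(b))
    if min(da, math.pi - da) > angle_tol:
        return None
    if min(math.dist(p, q) for p in a for q in b) > gap_tol:
        return None
    # The merged segment spans the two farthest-apart endpoints.
    pts = [a[0], a[1], b[0], b[1]]
    return max(((p, q) for p in pts for q in pts),
               key=lambda pq: math.dist(*pq))

def refine(segments, min_len=20.0, angle_tol=0.05, gap_tol=5.0):
    """Greedily merge broken collinear fragments, then suppress
    any segment still shorter than min_len (in pixels)."""
    segs = list(segments)
    merged = True
    while merged:
        merged = False
        for i in range(len(segs)):
            for j in range(i + 1, len(segs)):
                m = try_merge(segs[i], segs[j], angle_tol, gap_tol)
                if m is not None:
                    segs[j] = m
                    del segs[i]
                    merged = True
                    break
            if merged:
                break
    return [s for s in segs if seg_length(s) >= min_len]
```

For example, two collinear fragments `((0, 0), (10, 0))` and `((12, 0), (30, 0))` separated by a 2-pixel gap are merged into one 30-pixel segment that survives the length filter, while an isolated 5-pixel segment is suppressed. Merging first and filtering second matters: neither fragment alone passes the 20-pixel threshold.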
Keywords: binocular vision; line feature extraction; visual SLAM; feature matching
Quan M X, Piao S H, Li G. An overview of visual SLAM[J]. CAAI Transactions on Intelligent Systems, 2016, 11(6): 9. (in Chinese)
Zhang Y, Zhang Y, Zhang N, et al. Dynamic SLAM system for binocular catadioptric panoramic camera based on inverse depth filtering[J]. Optics and Precision Engineering, 2022, 30(11): 1282-1289. (in Chinese). doi: 10.37188/ope.20223011.1282
Zhao L Y, Jin R, Zhu Y Q, et al. Binocular inertial SLAM algorithm based on point-line feature fusion[J]. Acta Aeronautica et Astronautica Sinica, 2022, 43(3): 15. (in Chinese). doi: 10.7527/j.issn.1000-6893.2022.3.hkxb202203029
Zhou J L, Zhu B, Wu Z L. Camera pose estimation using fusion of 2D images and 3D point clouds[J]. Optics and Precision Engineering, 2022, 30(22): 2901-2912. (in Chinese). doi: 10.37188/ope.20223022.2901
Jia X X, Zhao D Q, Zhang L T, et al. Visual SLAM algorithm based on adaptive inertial navigation assisted feature matching[J]. Optics and Precision Engineering, 2023, 31(5): 621-630. (in Chinese). doi: 10.37188/ope.20233105.0621
Li H F, Hu Z H, Chen X W. PLP-SLAM: a visual SLAM method based on point, line and plane feature fusion[J]. Robot, 2017, 39(2): 214-220, 229. (in Chinese)
Mur-Artal R, Montiel J M M, Tardós J D. ORB-SLAM: a versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31(5). doi: 10.1109/tro.2015.2463671
Mur-Artal R, Tardós J D. ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras[J]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262. doi: 10.1109/tro.2017.2705103
Campos C, Elvira R, Rodríguez J J G, et al. ORB-SLAM3: an accurate open-source library for visual, visual-inertial, and multimap SLAM[J]. IEEE Transactions on Robotics, 2021, 37(6): 1874-1890. doi: 10.1109/tro.2021.3075644
Qin T, Li P, Shen S. VINS-Mono: a robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020. doi: 10.1109/tro.2018.2853729
Engel J, Koltun V, Cremers D. Direct sparse odometry[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(3): 611-625. doi: 10.1109/tpami.2017.2658577
Forster C, Pizzoli M, Scaramuzza D. SVO: fast semi-direct monocular visual odometry[C]//2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014: 15-22. doi: 10.1109/icra.2014.6906584
Yunus R, Li Y, Tombari F. ManhattanSLAM: robust planar tracking and mapping leveraging mixture of Manhattan frames[C]//2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021: 6687-6693. doi: 10.1109/icra48506.2021.9562030
Wei H, Tang F, Xu Z, et al. A point-line VIO system with novel feature hybrids and with novel line predicting-matching[J]. IEEE Robotics and Automation Letters, 2021, 6(4): 8681-8688. doi: 10.1109/lra.2021.3113987
Gomez-Ojeda R, Moreno F A, Zuñiga-Noël D, et al. PL-SLAM: a stereo SLAM system through the combination of points and line segments[J]. IEEE Transactions on Robotics, 2019, 35(3): 734-746. doi: 10.1109/tro.2019.2899783
Company-Corcoles J P, Garcia-Fidalgo E, Ortiz A. MSC-VO: exploiting Manhattan and structural constraints for visual odometry[J]. IEEE Robotics and Automation Letters, 2022, 7(2): 2803-2810. doi: 10.1109/lra.2022.3142900
Grompone von Gioi R, Jakubowicz J, Morel J M, Randall G. LSD: a line segment detector[J]. Image Processing On Line, 2012, 2: 35-55. doi: 10.5201/ipol.2012.gjmr-lsd
He Y J, Zhao J, Guo Y, et al. PL-VIO: tightly-coupled monocular visual-inertial odometry using point and line features[J]. Sensors, 2018, 18(4): 1159. doi: 10.3390/s18041159
Gomez-Ojeda R, Briales J, Gonzalez-Jimenez J. PL-SVO: semi-direct monocular visual odometry by combining points and line segments[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 4211-4216. doi: 10.1109/iros.2016.7759620
Wei H, Tang F, Zhang C, et al. Highly efficient line segment tracking with an IMU-KLT prediction and a convex geometric distance minimization[C]//2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021: 3999-4005. doi: 10.1109/icra48506.2021.9560931
Zhang L, Koch R. An efficient and robust line segment matching approach based on LBD descriptor and pairwise geometric consistency[J]. Journal of Visual Communication and Image Representation, 2013, 24(7): 794-805. doi: 10.1016/j.jvcir.2013.05.006
Kim P, Coltin B, Kim H J. Low-drift visual odometry in structured environments by decoupling rotational and translational motion[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 7247-7253. doi: 10.1109/icra.2018.8463207
Zhang T, Liu C, Li J, et al. A new visual inertial simultaneous localization and mapping (SLAM) algorithm based on point and line features[J]. Drones, 2022, 6(1): 23. doi: 10.3390/drones6010023
Burri M, Nikolic J, Gohl P, et al. The EuRoC micro aerial vehicle datasets[J]. The International Journal of Robotics Research, 2016, 35(10): 1157-1163. doi: 10.1177/0278364915620033
Menze M, Geiger A. Object scene flow for autonomous vehicles[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 3061-3070. doi: 10.1109/cvpr.2015.7298925