1. Unmanned System Research Institute, Northwestern Polytechnical University, Xi'an 710072, Shaanxi, China
2. National Innovation Institute of Defense Technology, Academy of Military Sciences, Beijing 100850, China
[ "龚 坤(1999-),男,湖南益阳人,硕士研究生,2017年于西北农林科技大学获得学士学位,主要从事视觉SLAM的研究。E-mail:gongkun@ mail.nwpu.edu.cn" ]
[ "许悦雷(1975-),男,教授,博士生导师,主要从事无人系统感知、视觉导航方面的研究。E-mal: xuyuelei@nwpu.edu.cn" ]
GONG Kun, XU Xin, CHEN Xiaoqing, et al. Binocular vision SLAM with fused point and line features in weak texture environment[J]. Optics and Precision Engineering, 2024, 32(05): 752-763. DOI: 10.37188/OPE.20243205.0752.
To address trajectory drift in point-feature-based visual Simultaneous Localization and Mapping (SLAM) in weakly textured indoor environments, this paper proposes a binocular visual SLAM system that fuses point and line features, with a focus on the extraction and matching of line features. To improve line feature quality, the LSD (Line Segment Detector) extraction method is enhanced with length and gradient suppression and the merging of short segments. The matching problem is then recast as an optimization problem whose cost function is built from geometric constraints, yielding a fast line segment triangulation method that exploits an L1-norm sparse solution for line matching and triangulation. Experiments show that the proposed method outperforms traditional descriptor-based approaches on multiple datasets; in weakly textured indoor scenes in particular, it achieves an average matching accuracy of 91.67% with an average matching time of only 7.4 ms. Built on this method, the binocular visual SLAM system attains a localization error of 1.24 m on weak-texture datasets, compared with 7.49 m for ORB-SLAM2 and 3.67 m for PL-SLAM, surpassing both in localization accuracy.
Keywords: binocular vision; line feature extraction; visual simultaneous localization and mapping; feature matching
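The extraction step described in the abstract, LSD detection followed by length and gradient suppression, can be made concrete with a short sketch. The Python/OpenCV code below is a minimal illustration under stated assumptions, not the paper's implementation: the thresholds MIN_LEN_RATIO and MIN_GRAD, the helper name detect_filtered_lines, and the sampling strategy are all assumptions, and the short-segment merging step mentioned in the abstract is omitted for brevity. Note that cv2.createLineSegmentDetector was removed from some OpenCV releases for licensing reasons and restored in version 4.5.1.

```python
# Minimal sketch of LSD extraction with length/gradient suppression.
# MIN_LEN_RATIO and MIN_GRAD are illustrative assumptions, not values
# from the paper; short-segment merging is omitted for brevity.
import cv2
import numpy as np

MIN_LEN_RATIO = 0.02  # assumed: keep segments longer than 2% of the image diagonal
MIN_GRAD = 30.0       # assumed: minimum mean gradient magnitude along a segment

def detect_filtered_lines(gray):
    """Detect LSD segments, then drop short or low-gradient ones."""
    # Available in OpenCV >= 4.5.1 (removed from some earlier 3.x/4.x
    # releases for licensing reasons).
    lsd = cv2.createLineSegmentDetector()
    lines, _, _, _ = lsd.detect(gray)
    if lines is None:
        return np.empty((0, 4), np.float32)
    lines = lines.reshape(-1, 4)

    # Gradient magnitude, computed once for the whole image.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    grad = cv2.magnitude(gx, gy)

    h, w = gray.shape
    diag = np.hypot(h, w)
    kept = []
    for x1, y1, x2, y2 in lines:
        length = np.hypot(x2 - x1, y2 - y1)
        if length < MIN_LEN_RATIO * diag:
            continue  # length suppression
        # Sample the gradient at integer points along the segment.
        n = max(int(length), 2)
        xs = np.linspace(x1, x2, n).astype(int).clip(0, w - 1)
        ys = np.linspace(y1, y2, n).astype(int).clip(0, h - 1)
        if grad[ys, xs].mean() < MIN_GRAD:
            continue  # gradient suppression
        kept.append((x1, y1, x2, y2))
    return np.asarray(kept, np.float32)

if __name__ == "__main__":
    # "frame.png" is a placeholder path, not an asset from the paper.
    gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
    print(detect_filtered_lines(gray).shape)
```

The geometric-constraint matching idea can be gestured at in the same spirit. The sketch below is a toy stand-in assuming a rectified stereo rig (epipolar lines are image rows); the absolute-value residuals only echo the L1-norm flavor of the paper's formulation, and the 0.05 weight, the helper names match_cost and match_lines, and the greedy assignment are illustrative assumptions rather than the authors' actual optimization.

```python
import numpy as np

def match_cost(seg_l, seg_r):
    """Toy geometric cost for a candidate left/right segment pair.

    Assumes a rectified stereo rig, so matching segments should have
    similar orientation and span roughly the same image rows. Both
    residuals are L1 (absolute-value) terms; the 0.05 weight is an
    arbitrary assumption, not a value from the paper.
    """
    x1, y1, x2, y2 = seg_l
    u1, v1, u2, v2 = seg_r
    # Orientation difference, folded to [0, pi/2] since segment
    # direction is ambiguous up to 180 degrees.
    d = abs(np.arctan2(y2 - y1, x2 - x1) - np.arctan2(v2 - v1, u2 - u1)) % np.pi
    angle_term = min(d, np.pi - d)
    # Epipolar consistency: compare the row ranges covered by the segments.
    overlap_term = abs(min(y1, y2) - min(v1, v2)) + abs(max(y1, y2) - max(v1, v2))
    return angle_term + 0.05 * overlap_term

def match_lines(lines_l, lines_r):
    """Greedy nearest-cost assignment (illustrative only)."""
    pairs = []
    for i, sl in enumerate(lines_l):
        costs = [match_cost(sl, sr) for sr in lines_r]
        if costs:
            j = int(np.argmin(costs))
            pairs.append((i, j, costs[j]))
    return pairs
```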
QUAN M X, PIAO S H, LI G. Overview of visual SLAM[J]. CAAI Transactions on Intelligent Systems, 2016, 11(6): 768-776. (in Chinese). doi: 10.11992/tis.201607026
ZHANG Y, ZHANG Y, ZHANG N, et al. Dynamic SLAM of binocular catadioptric panoramic camera based on inverse depth filter[J]. Opt. Precision Eng., 2022, 30(11): 1282-1289. (in Chinese). doi: 10.37188/ope.20223011.1282
ZHAO L Y, JIN R, ZHU Y Q, et al. Stereo visual-inertial SLAM algorithm based on merge of point and line features[J]. Acta Aeronautica et Astronautica Sinica, 2022, 43(3): 355-369. (in Chinese). doi: 10.7527/j.issn.1000-6893.2022.3.hkxb202203029
ZHOU J L, ZHU B, WU ZH L. Camera pose estimation based on 2D image and 3D point cloud fusion[J]. Opt. Precision Eng., 2022, 30(22): 2901-2912. (in Chinese). doi: 10.37188/ope.20223022.2901
JIA X X, ZHAO D Q, ZHANG L T, et al. A visual SLAM algorithm based on adaptive inertial navigation assistant feature matching[J]. Opt. Precision Eng., 2023, 31(5): 621-630. (in Chinese). doi: 10.37188/OPE.20233105.0621
LI H F, HU Z H, CHEN X W. PLP-SLAM: a visual SLAM method based on point-line-plane feature fusion[J]. Robot, 2017, 39(2): 214-220, 229. (in Chinese). doi: 10.13973/j.cnki.robot.2017.0214
MUR-ARTAL R, MONTIEL J M M, TARDOS J D. ORB-SLAM: a versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163. doi: 10.1109/tro.2015.2463671
MUR-ARTAL R, TARDÓS J D. ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras[J]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262. doi: 10.1109/tro.2017.2705103
CAMPOS C, ELVIRA R, RODRIGUEZ J J G, et al. ORB-SLAM3: an accurate open-source library for visual, visual-inertial, and multimap SLAM[J]. IEEE Transactions on Robotics, 2021, 37(6): 1874-1890. doi: 10.1109/tro.2021.3075644
QIN T, LI P L, SHEN S J. VINS-mono: a robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020. doi: 10.1109/tro.2018.2853729
ENGEL J, KOLTUN V, CREMERS D. Direct sparse odometry[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(3): 611-625. doi: 10.1109/tpami.2017.2658577
FORSTER C, PIZZOLI M, SCARAMUZZA D. SVO: fast semi-direct monocular visual odometry[C]. 2014 IEEE International Conference on Robotics and Automation (ICRA), May 31-June 7, 2014, Hong Kong, China. IEEE, 2014: 15-22. doi: 10.1109/icra.2014.6906584
YUNUS R, LI Y Y, TOMBARI F. ManhattanSLAM: robust planar tracking and mapping leveraging mixture of Manhattan frames[C]. 2021 IEEE International Conference on Robotics and Automation (ICRA), May 30-June 5, 2021, Xi'an, China. IEEE, 2021: 6687-6693. doi: 10.1109/icra48506.2021.9562030
WEI H, TANG F L, XU Z W, et al. A point-line VIO system with novel feature hybrids and with novel line predicting-matching[J]. IEEE Robotics and Automation Letters, 2021, 6(4): 8681-8688. doi: 10.1109/lra.2021.3113987
GOMEZ-OJEDA R, MORENO F A, ZUNIGA-NOEL D, et al. PL-SLAM: a stereo SLAM system through the combination of points and line segments[J]. IEEE Transactions on Robotics, 2019, 35(3): 734-746. doi: 10.1109/tro.2019.2899783
COMPANY-CORCOLES J P, GARCIA-FIDALGO E, ORTIZ A. MSC-VO: exploiting Manhattan and structural constraints for visual odometry[J]. IEEE Robotics and Automation Letters, 2022, 7(2): 2803-2810. doi: 10.1109/lra.2022.3142900
GROMPONE VON GIOI R, JAKUBOWICZ J, MOREL J M, et al. LSD: a line segment detector[J]. Image Processing On Line, 2012, 2: 35-55. doi: 10.5201/ipol.2012.gjmr-lsd
HE Y J, ZHAO J, GUO Y, et al. PL-VIO: tightly-coupled monocular visual-inertial odometry using point and line features[J]. Sensors, 2018, 18(4): 1159. doi: 10.3390/s18041159
GOMEZ-OJEDA R, BRIALES J, GONZALEZ-JIMENEZ J. PL-SVO: semi-direct monocular visual odometry by combining points and line segments[C]. 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 4211-4216. doi: 10.1109/iros.2016.7759620
WEI H, TANG F L, ZHANG C F, et al. Highly efficient line segment tracking with an IMU-KLT prediction and a convex geometric distance minimization[C]. 2021 IEEE International Conference on Robotics and Automation (ICRA), May 30-June 5, 2021, Xi'an, China. IEEE, 2021: 3999-4005. doi: 10.1109/icra48506.2021.9560931
ZHANG L L, KOCH R. An efficient and robust line segment matching approach based on LBD descriptor and pairwise geometric consistency[J]. Journal of Visual Communication and Image Representation, 2013, 24(7): 794-805. doi: 10.1016/j.jvcir.2013.05.006
KIM P, COLTIN B, KIM H J. Low-drift visual odometry in structured environments by decoupling rotational and translational motion[C]. 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 7247-7253. doi: 10.1109/icra.2018.8463207
ZHANG T, LIU C J, LI J Q, et al. A new visual inertial simultaneous localization and mapping (SLAM) algorithm based on point and line features[J]. Drones, 2022, 6(1): 23. doi: 10.3390/drones6010023
BURRI M, NIKOLIC J, GOHL P, et al. The EuRoC micro aerial vehicle datasets[J]. International Journal of Robotics Research, 2016, 35(10): 1157-1163. doi: 10.1177/0278364915620033
MENZE M, GEIGER A. Object scene flow for autonomous vehicles[C]. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). June 7-12, 2015. Boston, MA, USA. IEEE, 2015: 3061-3070.