1. University of Chinese Academy of Sciences, Beijing 100049, China
2. Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing 100094, China
3. Key Laboratory of Space Utilization, Chinese Academy of Sciences, Beijing 100094, China
[ "孙运达(1995-),男,河北承德人,硕士研究生,2018年于北京交通大学获得学士学位,主要从事机器视觉方面的研究。E-mail: sunyunda18@csu.ac.cn" ]
[ "万 雪(1988-),女,湖北武汉人,研究员,硕士生导师,2015年于英国帝国理工大学获得博士学位,主要从事计算机视觉算法研究。E-mail:wanxue@csu.ac.cn" ]
Received: 2021-04-29
Revised: 2021-05-28
SUN Yun-da, WAN Xue, LI Sheng-yang. Siamese network based satellite component tracking [J]. Optics and Precision Engineering. DOI: 10.37188/OPE..0001
To meet the demand for precise localization of spacecraft components during space missions, and to address the confusion that readily arises among components of the same category, this paper proposes a spacecraft component tracking algorithm based on a Siamese network. First, the component tracking problem is modeled as a data-driven similarity measurement problem via a neural network, and the Siamese network used in this paper is designed with an improved AlexNet as the Siamese unit. Second, the network is trained on the large public dataset GOT-10k, using stochastic gradient descent as the optimizer to strengthen its representation ability. Finally, to resolve the localization confusion caused by the similar appearance of same-category spacecraft components, a tracking strategy that incorporates motion-sequence (temporal) features is proposed, which improves tracking accuracy. Spacecraft video data released by ESA are used as test data to verify the performance of the proposed algorithm. The experimental results show that, without using any spacecraft-related data for training, the proposed algorithm achieves a tracking success rate of 93.4% at an overlap threshold of 50% between the predicted bounding box and the ground-truth component region, at a speed of 38 FPS. This demonstrates that the proposed method meets the requirements of stable, reliable, high-precision spacecraft component tracking with strong anti-interference ability.
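To make the similarity-measurement formulation concrete, the following is a minimal sketch in PyTorch of a SiamFC-style Siamese head: an AlexNet-like convolutional backbone embeds both the component template and the search region, and a cross-correlation produces a response map whose peak marks the most similar location. The layer layout, crop sizes, and names below are illustrative assumptions, not the authors' exact architecture or their motion-sequence tracking strategy.

# Minimal sketch of a SiamFC-style Siamese similarity head (assumptions:
# AlexNet-like backbone layout, 127/255 crop sizes; not the paper's exact model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseTracker(nn.Module):
    def __init__(self):
        super().__init__()
        # Simplified AlexNet-style convolutional embedding (assumption).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=2), nn.BatchNorm2d(96), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5), nn.BatchNorm2d(256), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3), nn.BatchNorm2d(384), nn.ReLU(),
            nn.Conv2d(384, 384, kernel_size=3), nn.BatchNorm2d(384), nn.ReLU(),
            nn.Conv2d(384, 256, kernel_size=3),
        )

    def forward(self, template, search):
        # template: exemplar crop of the component, e.g. (N, 3, 127, 127)
        # search:   larger search region,           e.g. (N, 3, 255, 255)
        z = self.backbone(template)                 # (N, C, hz, wz)
        x = self.backbone(search)                   # (N, C, hx, wx)
        # Cross-correlate each search embedding with its own template by
        # treating the templates as grouped convolution kernels.
        n = z.size(0)
        x = x.reshape(1, -1, x.size(2), x.size(3))  # (1, N*C, hx, wx)
        score = F.conv2d(x, z, groups=n)            # (1, N, H, W)
        return score.reshape(n, 1, score.size(2), score.size(3))

# Usage: a batch of two template/search pairs yields a 17x17 response map each.
tracker = SiameseTracker()
z = torch.randn(2, 3, 127, 127)   # component templates from the first frame
x = torch.randn(2, 3, 255, 255)   # search regions from the current frame
print(tracker(z, x).shape)        # torch.Size([2, 1, 17, 17])

In a full tracker, the response map would be upsampled to locate the component in the original frame and then combined with the motion-sequence cue described in the abstract to disambiguate same-category components; that logic is omitted from this sketch.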