1. Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130000, Jilin, China
2. BYD Auto Industry Co., Ltd., Shenzhen 518000, Guangdong, China
LIN Huilan (1999-), female, from Xinyang, Henan, Ph.D. candidate, mainly engaged in research on computer vision. E-mail: linhuilan21@mails.ucas.ac.cn
ZHAO Chunlei (1989-), female, from Tonghua, Jilin, Ph.D., assistant researcher. She received her B.S. degree from Changchun University of Science and Technology in 2011 and her Ph.D. from the Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences in 2016. Her research interests include video compression and transmission and computer vision. E-mail: zhaochunlei@ciomp.ac.cn
Print publication date: 2024-12-10
Received: 2024-05-24
Revised: 2024-06-18
LIN Huilan,ZHAO Chunlei,HAO Zhicheng,et al.Single target tracking in complex scenarios[J].Optics and Precision Engineering,2024,32(23):3490-3503. DOI: 10.37188/OPE.20243223.3490.
To improve single-target tracking in complex scenarios such as target deformation, occlusion, interference from similar objects, and out-of-view situations, a novel tracking algorithm is proposed. Building on the Staple algorithm, the method optimizes pixel weight assignment with a two-dimensional Gaussian function when building the color histogram statistics, increasing the separability of target and background. An adaptive fusion mechanism based on the Peak-to-Sidelobe Ratio (PSR) is introduced to combine HOG and color features, with fusion coefficients chosen so that the mixed feature remains reliable. The target's optimal center position is determined from the distance between each candidate region's center and the previous frame's target center, together with the maximum fused response, which suppresses interference from similar targets. Target loss and occlusion are detected using the fused response, HOG features, and the Average Peak-to-Correlation Energy (APCE); the target box position is held during these episodes, enabling timely re-acquisition when the target reappears. A template update strategy that combines information from previous and current frames further improves tracking accuracy. On OTB100 videos covering the deformation, occlusion, and out-of-view attributes, the improved algorithm raises the overall and attribute-specific (deformation, occlusion, out-of-view) success rates by 1.8%, 3.3%, and 2%, and the precision for the deformation attribute by 9%, compared with Staple. On the VOT16 dataset, the overlap for the overall and occlusion attributes improves by 0.0222 and 0.0196 respectively, meeting the demands of target tracking in these complex scenarios.
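The PSR and APCE measures used above for fusion weighting and loss detection have standard definitions in correlation-filter tracking. The sketch below illustrates them, along with a PSR-weighted fusion and a linear-interpolation template update. The function names, the sidelobe exclusion window, the exact fusion rule, and the learning rate `eta` are illustrative assumptions, not the paper's published formulation.

```python
import numpy as np

def psr(response, exclude=5):
    """Peak-to-Sidelobe Ratio of a correlation response map.
    A sharper, more reliable peak yields a higher PSR."""
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    # Mask out a window around the peak; the rest is the sidelobe region.
    sidelobe_mask = np.ones(response.shape, dtype=bool)
    sidelobe_mask[max(0, py - exclude):py + exclude + 1,
                  max(0, px - exclude):px + exclude + 1] = False
    sidelobe = response[sidelobe_mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)

def apce(response):
    """Average Peak-to-Correlation Energy; drops sharply when the
    target is occluded or lost (response becomes flat/noisy)."""
    fmax, fmin = response.max(), response.min()
    return (fmax - fmin) ** 2 / (np.mean((response - fmin) ** 2) + 1e-12)

def fuse(resp_hog, resp_color):
    """PSR-weighted adaptive fusion of the HOG and color response maps.
    Weighting by relative PSR is one plausible coefficient choice."""
    p_h, p_c = psr(resp_hog), psr(resp_color)
    alpha = p_h / (p_h + p_c)
    return alpha * resp_hog + (1 - alpha) * resp_color

def update_template(template, new_obs, eta=0.01):
    """Linear-interpolation template update, the common practice for
    blending previous-frame and current-frame information."""
    return (1 - eta) * template + eta * new_obs
```

In a tracker loop, one would fuse the two response maps each frame, accept the detection only when PSR and APCE stay above running-average thresholds, and skip the template update while the loss/occlusion condition holds.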
single object tracking; complex scenarios; background suppression; similar target re-identification; loss determination