1. School of Computer and Information Engineering, Fuyang Normal University, Fuyang 236037, Anhui, China
2. School of Computer Science and Technology, Anhui University, Hefei 230601, Anhui, China
[ "范建中(1973-), 男, 安徽黄山人, 硕士, 讲师, 1997年于安徽建筑工业学院获得学士学位, 2005年于华侨大学获得硕士学位, 主要从事图形图像处理, 算法等方面的研究.E-mail:fjz_73@126.com" ]
[ "王秀友(1975-), 男, 安徽宿州人, 硕士, 副教授, 1998年于阜阳师范学院获得学士学位, 2008年于安徽大学获得硕士学位, 主要研究领域为计算机辅助几何设计与图形学, 图像处理与模式识别.E-mail:wangxiuyou@163.com" ]
Received: 2017-04-21
Accepted: 2017-06-22
Published in print: 2017-09-25
Xiu-you WANG, Jian-zhong FAN, Hua-ming LIU, et al. Visual tracking via adaptive interactive fusion[J]. Optics and Precision Engineering, 2017, 25(9): 2499-2507. DOI: 10.3788/OPE.20172509.2499.
Collaborative trackers based on traditional fusion strategies lack robustness in complex environments. To address this, an adaptive fusion tracker is proposed within the interacting multiple model particle filter framework, in which the transition probability matrix (TPM) is updated online. First, within the Bayesian framework, an iterative TPM update equation is derived using minimum mean square error estimation. Then, the numerical solution of this iterative equation is obtained by numerical integration. Finally, combined with resampling, the updated TPM drives adaptive interaction of the prior state distributions of the different sub-trackers, so that the target state is determined by the particles with larger transmitted weights. Tracking experiments in complex environments show that the proposed adaptive interactive fusion strategy adds a correction step for the particles' prior states and effectively avoids the tracking-drift problem caused by error accumulation; the resulting collaborative tracker is markedly more robust than single trackers or collaborative trackers based on other fusion strategies.
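The Python sketch below illustrates the general flavor of the two key steps described in the abstract, online TPM correction and interaction (mixing) of the sub-trackers' prior particle sets. It is a minimal illustration under simplifying assumptions: the function names update_tpm and interact_priors, the convex-step TPM correction, and the multinomial mixing rule are hypothetical stand-ins for the paper's MMSE-based iterative update and its numerical-integration solution, not the authors' exact equations.

```python
import numpy as np

def update_tpm(tpm, mode_probs, likelihoods, lr=0.1):
    """Online correction of a row-stochastic TPM from current tracker likelihoods.

    tpm:         (M, M) matrix, tpm[i, j] = P(model j at k | model i at k-1).
    mode_probs:  (M,) model probabilities at the previous step.
    likelihoods: (M,) measurement likelihoods of the M sub-trackers at this step.
    NOTE: a simple convex-step surrogate for the paper's iterative MMSE update.
    """
    # Joint responsibility of (previous model i, current model j),
    # proportional to mu_i * pi_ij * L_j, then row-normalized.
    joint = (mode_probs[:, None] * tpm) * likelihoods[None, :]
    joint /= joint.sum(axis=1, keepdims=True) + 1e-12
    new_tpm = (1.0 - lr) * tpm + lr * joint      # small corrective step
    return new_tpm / new_tpm.sum(axis=1, keepdims=True)

def interact_priors(particles, weights, tpm, mode_probs, rng=None):
    """Mix (resample) each sub-tracker's prior particle set from all sub-trackers.

    particles: list of (N, d) particle arrays, one per sub-tracker.
    weights:   list of (N,) normalized particle weights.
    Particles with larger transmitted weights dominate the interacted priors.
    """
    rng = np.random.default_rng() if rng is None else rng
    mixed = []
    for j in range(len(particles)):
        # Share of tracker j's interacted prior drawn from each tracker i.
        mix = mode_probs * tpm[:, j]
        mix /= mix.sum() + 1e-12
        counts = rng.multinomial(particles[j].shape[0], mix)
        drawn = []
        for i, n in enumerate(counts):
            if n == 0:
                continue
            idx = rng.choice(len(weights[i]), size=n, p=weights[i])
            drawn.append(particles[i][idx])
        mixed.append(np.concatenate(drawn, axis=0))
    return mixed

# Toy usage with two sub-trackers and 4-D states (x, y, vx, vy).
rng = np.random.default_rng(0)
tpm = np.array([[0.9, 0.1], [0.1, 0.9]])
mode_probs = np.array([0.5, 0.5])
particles = [rng.normal(size=(200, 4)), rng.normal(size=(200, 4))]
weights = [np.full(200, 1 / 200), np.full(200, 1 / 200)]
tpm = update_tpm(tpm, mode_probs, likelihoods=np.array([0.8, 0.2]))
priors = interact_priors(particles, weights, tpm, mode_probs, rng)
```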