Hunan Provincial Key Laboratory of Image Measurement and Vision Navigation, College of Aerospace Science and Engineering, National University of Defense Technology, Changsha 410073, China
Received: 2015-01-19
Revised: 2015-03-06
Published in print: 2015-06-25
ZHANG Yue-qiang, SU Ang, LIU Hai-bo, et al. Pose estimation based on multiple line hypothesis and iteratively reweighted least squares [J]. Optics and Precision Engineering, 2015, 23(6): 1722-1731. DOI: 10.3788/OPE.20152306.1722.
To estimate the pose of a known rigid object in a complex environment, a pose estimation method based on multiple line correspondences and iteratively reweighted least squares is proposed. First, each model line is sampled at equal intervals, and for every sample point the corresponding image points are obtained by a one-dimensional search along the normal of the projected line. Then, each sample point is weighted according to the local and global appearance of its image correspondences. Finally, the pose is refined by minimizing the weighted normal distances between the projected sample points and their corresponding image points. To avoid optimization failures caused by incorrect model-to-image correspondences, multiple low-level image-point hypotheses are retained for each sample point during matching; the Randomized Hough Transform (RHT) constrains these hypotheses to lie on straight lines, so that each model line keeps multiple candidate image lines. Because the weighting stage considers both the attributes of a sample point itself and its relation to neighboring points, the robustness of the algorithm to texture, cluttered background and noise is effectively improved. Experimental results show that the proposed method estimates the pose of a freely moving object in an unconstrained environment: the rotation errors about the x, y and z axes are better than 0.4°, 0.3° and 0.1°, respectively, and the relative position errors perpendicular to and along the optical axis are better than 0.03% and 0.1%, respectively. Compared with the single-hypothesis method, the proposed method overcomes the interference of complex backgrounds more effectively and yields stable pose estimates in special views.
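To make the correspondence-weighting and reweighted least-squares step concrete, the following Python sketch illustrates the general idea: every projected model sample point keeps several candidate image points found along its search normal, and an IRLS loop selects the currently closest hypothesis, down-weights unreliable residuals with a robust (Tukey) weight, and refines the pose. This is a minimal illustration, not the authors' implementation: the pose update is reduced to a 2D image-plane translation for brevity, and all helper names (`tukey_weight`, `refine_translation`) are assumptions made for this example.

```python
import numpy as np

def tukey_weight(r, c=4.685):
    """Tukey biweight: residuals larger than c get zero weight."""
    w = np.zeros_like(r, dtype=float)
    inlier = np.abs(r) < c
    w[inlier] = (1.0 - (r[inlier] / c) ** 2) ** 2
    return w

def refine_translation(samples, normals, hypotheses, n_iter=10):
    """
    samples    : (N, 2) projected positions of the model sample points
    normals    : (N, 2) unit normals of the projected model lines
    hypotheses : list of (K_i, 2) arrays, the candidate image points kept
                 along the search normal of each sample point
    Returns a 2D image-plane translation minimizing the weighted normal
    distances between sample points and their selected hypotheses.
    """
    t = np.zeros(2)
    for _ in range(n_iter):
        rows, residuals = [], []
        for p, n, cands in zip(samples, normals, hypotheses):
            if len(cands) == 0:
                continue
            # signed normal distance of every hypothesis to the moved sample point
            d = (cands - (p + t)) @ n
            k = np.argmin(np.abs(d))        # keep the currently closest hypothesis
            residuals.append(d[k])
            rows.append(n)
        if len(residuals) < 2:
            break
        r = np.asarray(residuals)
        A = np.asarray(rows)
        scale = 1.4826 * np.median(np.abs(r)) + 1e-9   # robust (MAD) scale
        w = np.sqrt(tukey_weight(r / scale))
        # weighted least squares: minimize sum_i w_i^2 * (n_i . dt - d_i)^2
        dt, *_ = np.linalg.lstsq(A * w[:, None], r * w, rcond=None)
        t = t + dt
        if np.linalg.norm(dt) < 1e-6:
            break
    return t
```

Re-selecting the closest hypothesis at every iteration is what distinguishes the multi-hypothesis scheme from a single-hypothesis tracker: a sample point initially matched to a background or texture edge can switch to a better candidate as the pose estimate improves, instead of dragging the optimization toward a wrong solution. A full implementation would replace the 2D translation with a 6-DOF pose increment and the corresponding projection Jacobians.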