ZHANG Yue-qiang, SU Ang, LIU Hai-bo, et al. Pose estimation based on multiple line hypothesis and iteratively reweighted least squares[J]. Optics and Precision Engineering, 2015, 23(6): 1722-1731. DOI: 10.3788/OPE.20152306.1722.
Pose estimation based on multiple line hypothesis and iteratively reweighted least squares
To estimate the poses of known rigid objects efficiently in complex environments, a pose estimation method was proposed that combines multiple line hypotheses with iteratively reweighted least squares. For each model sample point obtained by equal-interval sampling, a 1D search along the normal direction was used to find the corresponding image point. Then, the weight of each visible model sample point was computed from the local and global appearance of its corresponding image point, and the optimized pose parameters were obtained by minimizing the errors between the sample points and their corresponding image points. To avoid failure of the pose optimization caused by mismatches between model and image lines, multiple low-level hypotheses were retained for each model sample point during registration, and the hypotheses for each potential edge were classified into multiple lines by the Randomized Hough Transform (RHT). Because the weighting process exploits both the properties of each sample point and its relation to neighboring points, robustness to cluttered backgrounds and noise is enhanced. Experiments show that the proposed method effectively estimates the poses of freely moving objects in an unconstrained environment. The precision of the estimated rotations about the x, y, and z axes is better than 0.4°, 0.3°, and 0.1°, respectively, and the precision of the relative positions perpendicular to and along the optical axis is better than 0.03% and 0.1%, respectively. Comparisons with a single-hypothesis method demonstrate that the proposed method overcomes the influence of complex backgrounds and optimizes the pose parameters in special views.
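The core numerical step named in the abstract, iteratively reweighted least squares, can be illustrated with a minimal sketch. This is not the paper's full pose optimization (which minimizes point-to-edge errors over 6-DOF pose parameters); it is a 2D robust line fit showing the same reweighting principle. The Tukey biweight function and the MAD-based residual scale are common robust-estimation choices assumed here, not taken from the paper.

```python
import numpy as np

def tukey_weights(r, c=4.685):
    """Tukey biweight: residuals larger than c get weight zero."""
    w = np.zeros_like(r)
    inlier = np.abs(r) < c
    w[inlier] = (1.0 - (r[inlier] / c) ** 2) ** 2
    return w

def irls_line_fit(points, iters=10):
    """Robustly fit y = a*x + b by iteratively reweighted least squares.

    Each iteration solves a weighted least-squares problem, then
    recomputes the weights from the scaled residuals, so outliers
    are progressively down-weighted.
    """
    x, y = points[:, 0], points[:, 1]
    A = np.stack([x, np.ones_like(x)], axis=1)
    w = np.ones_like(y)
    theta = np.zeros(2)
    for _ in range(iters):
        sw = np.sqrt(w)[:, None]                     # sqrt-weights for lstsq
        theta, *_ = np.linalg.lstsq(sw * A, sw[:, 0] * y, rcond=None)
        r = y - A @ theta
        scale = 1.4826 * np.median(np.abs(r)) + 1e-12  # robust (MAD) scale
        w = tukey_weights(r / scale)
    return theta
```

Because the weights enter the normal equations as multiplicative factors, a gross mismatch (here, one outlying point) is effectively removed from the fit after the first reweighting, which mirrors how the paper's weighting suppresses spurious edge correspondences.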
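The abstract's other key step, classifying the retained low-level hypotheses into multiple lines, can also be sketched. The Randomized Hough Transform proper accumulates votes in line-parameter space; the simplified version below keeps only its sampling idea: repeatedly pick two points, hypothesize the line through them, and accept the hypothesis when enough points support it. The function name and the thresholds (`tol`, `min_inliers`, `trials`) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rht_group_lines(points, trials=200, tol=0.5, min_inliers=8, seed=0):
    """Randomized-Hough-style line grouping on 2D points.

    Sample random point pairs as line hypotheses; accept a hypothesis
    when at least `min_inliers` remaining points lie within `tol` of it,
    then remove those points and continue on the rest.
    """
    rng = np.random.default_rng(seed)
    pts = points.copy()
    lines = []
    for _ in range(trials):
        if len(pts) < min_inliers:
            break
        i, j = rng.choice(len(pts), size=2, replace=False)
        p, q = pts[i], pts[j]
        d = q - p
        length = np.linalg.norm(d)
        if length < 1e-9:
            continue  # degenerate pair, resample
        normal = np.array([-d[1], d[0]]) / length  # unit normal of the line
        dist = np.abs((pts - p) @ normal)          # point-to-line distances
        support = dist < tol
        if support.sum() >= min_inliers:
            lines.append((p, q))    # keep the accepted line hypothesis
            pts = pts[~support]     # remove the points it explains
    return lines, pts
```

Removing the supporting points after each acceptance lets the procedure recover several lines per edge region, which is what allows the tracker to keep multiple candidate edges alive instead of committing early to a single (possibly wrong) correspondence.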