1. Department of Automation, Tsinghua University, Beijing 100084, China
2. Graduate School at Shenzhen, Tsinghua University, Shenzhen 518055, China
[ "邹瑜(1988-), 男, 博士研究生, 2010年于清华大学获得学士学位, 主要从事空间机器人测量及计算机视觉方面的研究。E-mail:zouyu.yoyo@gmail.com" ]
[ "梁斌(1968-), 男, 博士生导师, 教授, 1994年于清华大学获得博士学位, 主要从事空间机器人控制、导航与制导以及视觉测量等方面的研究。E-mail:bliang@tsinghua.edu.cn" ]
Received: 2017-04-21
Accepted: 2017-06-20
Published in print: 2017-11-25
Yu ZOU, Bin LIANG, Xue-qian WANG, et al. Space target pose estimation based on binary rotational projection histogram[J]. Optics and Precision Engineering, 2017, 25(11): 2958-2967. DOI: 10.3788/OPE.20172511.2958.
To quickly estimate the relative pose of a space target from point cloud data, a Binary Rotational Projection Histogram (BRoPH) feature descriptor was proposed. First, a Local Reference Frame (LRF) was established at each feature point; then, density and depth image patches were generated under different viewpoints by rotationally projecting the local point cloud around the feature point; finally, a multi-scale binary string was produced for the feature point from these image patches. To meet the real-time requirement of pose estimation, the distribution of BRoPH Hamming distances was analyzed, and a feature matching strategy based on a Hamming distance threshold was proposed to exclude potential false matching pairs and accelerate the convergence of coarse pose estimation. Finally, BRoPH was compared with the SHOT and FPFH descriptors within a pose estimation framework based on local feature descriptors. The results show that BRoPH requires only about 1/80 of the average memory of SHOT and FPFH while achieving a much higher average pose estimation accuracy, with an average attitude error below 0.1° and an average position error below 1/180 R. In addition, the Hamming-distance-threshold matching strategy speeds up the coarse (RANSAC-based) pose estimation by a factor of 7, and the overall pose estimation rate exceeds 7 Hz, 3 to 6.8 times faster than SHOT and FPFH, respectively. The proposed descriptor is compact and efficient, and the resulting pose estimation method offers low memory usage, fast computation, high accuracy, and strong robustness, satisfying the real-time requirements of point-cloud-based space target pose estimation.
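The abstract does not give implementation details of the matching step, so the following is only a minimal, self-contained Python sketch of how a Hamming-distance-threshold matching strategy over packed binary descriptors could look before the coarse (RANSAC-based) alignment. The descriptor length (32 bytes), the threshold value, and the helper names (hamming_distances, match_with_threshold) are illustrative assumptions, not the paper's actual BRoPH implementation.

```python
import numpy as np

def hamming_distances(query, targets):
    """Hamming distances between one binary descriptor and a set of descriptors.

    Descriptors are bit-packed uint8 arrays, so the distance is the number of
    differing bits, computed via XOR and an 8-bit popcount lookup table.
    """
    popcount = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint16)
    return popcount[np.bitwise_xor(targets, query)].sum(axis=1)

def match_with_threshold(desc_src, desc_tgt, max_hamming):
    """Nearest-neighbour matching in Hamming space with a distance threshold.

    Pairs whose smallest distance exceeds max_hamming are discarded as likely
    false matches before they are handed to the coarse pose estimation stage.
    Returns a list of (src_index, tgt_index) tentative correspondences.
    """
    matches = []
    for i, q in enumerate(desc_src):
        d = hamming_distances(q, desc_tgt)
        j = int(np.argmin(d))
        if d[j] <= max_hamming:
            matches.append((i, j))
    return matches

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy example: 32-byte (256-bit) binary descriptors for two point sets,
    # where the target set is a slightly bit-flipped copy of the source set.
    src = rng.integers(0, 256, size=(50, 32), dtype=np.uint8)
    flips = ((rng.random(src.shape) < 0.02) *
             rng.integers(0, 256, src.shape, dtype=np.uint8)).astype(np.uint8)
    tgt = np.bitwise_xor(src, flips)
    print(len(match_with_threshold(src, tgt, max_hamming=40)), "tentative matches")
```

In such a pipeline, the surviving correspondences would typically be passed to a RANSAC-style rigid-transform estimator for the coarse pose and then refined with ICP; the threshold value controls the trade-off between discarding false pairs and keeping enough correspondences for the estimator to converge.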