1. School of Instrumentation Science and Opto-electronics Engineering, Beihang University, Beijing 100083, China
2. National Key Laboratory of Aerospace Intelligent Control Technology, Beijing 100854, China
QU Yu-fu (1976-), male, born in Jingbian, Shaanxi Province; Ph.D., master's supervisor. He received his M.S. and Ph.D. degrees from Harbin Institute of Technology in 2001 and 2004, respectively, and carried out postdoctoral research at the School of Instrumentation Science and Opto-electronics Engineering, Beihang University, from 2004 to 2006. His research interests include photoelectric detection and computational imaging. E-mail: qyf@buaa.edu.cn
LIU Zi-yue (1993-), female, born in Beijing; M.S. candidate. She received her B.S. degree from Beihang University in 2014. Her research interests include target recognition and image processing. E-mail: liuziyue008@126.com
Received: 2016-06-30
Accepted: 2016-08-30
Published in print: 2017-01-25
Yu-fu QU, Zi-yue LIU, Yun-qiu JIANG, et al. Self-adaptative variable-metric feature point extraction method[J]. Optics and Precision Engineering, 2017, 25(1): 188-197. DOI: 10.3788/OPE.20172501.0188.
To increase the speed of feature point matching, a feature point extraction method that adaptively constructs a variable-scale image pyramid is proposed. The method uses the number of FAST feature points as a measure of the information content of the scale-space representation and takes the information difference between two adjacent blurred images as the criterion for pyramid layering; by adjusting the scale parameters, the detail features between adjacent images are made to change uniformly. A threshold on the number of matched points is used to control the height of the pyramid, and a "matching while constructing" strategy is applied to improve the efficiency of feature matching. Finally, the proposed method is compared with three feature extraction methods: SIFT, FAST, and ASIFT. Experimental results show that the correct matching rate of the proposed method reaches 43.59% under scale changes, an improvement of 25.51% over SIFT, and that the extracted feature points still represent the target correctly after various changes in illumination, viewing angle, and so on. The proposed method selects its parameters adaptively according to the characteristics of the target image, achieves satisfactory matching without manual tuning, adapts to feature extraction and matching under a variety of changing conditions, and improves the efficiency of feature extraction and matching.
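The adaptive layering and "matching while constructing" steps summarized above can be illustrated with a short sketch. The Python/OpenCV code below is one minimal reading of the idea, assuming OpenCV's FAST detector as the information measure and ORB descriptors for matching; the concrete thresholds (target_drop, enough_matches, max_layers), the bisection search for the Gaussian sigma, and the omission of downsampling between pyramid layers are illustrative assumptions, not the exact parameters or procedure of the paper.

```python
# Sketch of the adaptive variable-scale pyramid described in the abstract:
# the FAST corner count of each blurred layer serves as the "information
# content" measure, the scale step is chosen so that the information drop
# between adjacent layers is roughly uniform, and pyramid construction stops
# early once enough matches against a reference image are found
# ("matching while constructing"). Parameter values are illustrative only.
import cv2

fast = cv2.FastFeatureDetector_create(threshold=20)
orb = cv2.ORB_create()          # used only to describe the FAST keypoints
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def info_content(img):
    """Information measure of a layer: the number of FAST feature points."""
    return len(fast.detect(img, None))

def next_layer(img, target_drop, sigma_lo=0.5, sigma_hi=8.0, iters=10):
    """Search a Gaussian sigma so the FAST-count drop is close to target_drop."""
    base = info_content(img)
    for _ in range(iters):
        sigma = 0.5 * (sigma_lo + sigma_hi)
        blurred = cv2.GaussianBlur(img, (0, 0), sigma)
        drop = base - info_content(blurred)
        if drop < target_drop:
            sigma_lo = sigma      # too little detail removed: increase sigma
        else:
            sigma_hi = sigma      # too much detail removed: decrease sigma
    return cv2.GaussianBlur(img, (0, 0), 0.5 * (sigma_lo + sigma_hi))

def describe(img):
    """Detect FAST keypoints and compute ORB descriptors for them."""
    kps = fast.detect(img, None)
    return orb.compute(img, kps)

def match_while_constructing(ref, query, target_drop=200,
                             enough_matches=50, max_layers=6):
    """Build pyramid layers one by one; stop as soon as enough matches appear."""
    _, ref_desc = describe(ref)
    layer = query
    for level in range(max_layers):
        _, desc = describe(layer)
        if desc is not None and ref_desc is not None:
            matches = matcher.match(ref_desc, desc)
            if len(matches) >= enough_matches:
                return level, matches    # early exit: pyramid height is adaptive
        layer = next_layer(layer, target_drop)
    return max_layers, []
```

A caller would pass two grayscale images, e.g. loaded with cv2.imread(path, cv2.IMREAD_GRAYSCALE); the returned level reports how many layers were built before the match-count threshold was reached, which is how the pyramid height stays adaptive to the image content.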
MORAVEC H P. Rover visual obstacle avoidance[C]. Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI), 1981: 785-790.
HARRIS C. A combined corner and edge detector[C]. Alvey Vision Conference, 1988:147-151.
SIRISHA B, SANDHYA B. Evaluation of distinctive color features from Harris corner key points[C]. Proceedings of the 3rd IEEE International Advance Computing Conference (IACC), Ghaziabad, India, Feb. 22-23, 2013.
LUO Z. Survey of corner detection techniques in image processing[J]. International Journal of Recent Technology and Engineering, 2013,2(2):2277.
ROSTEN E, PORTER R, DRUMMOND T. Faster and better:a machine learning approach to corner detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(1):105-119.
王飞宇,邸男,贾平. 结合尺度空间FAST角点检测器和SURF描绘器的图像特征[J]. 液晶与显示, 2014, 29(4):598-604.
WANG F Y, DI N, JIA P. Image features using scale-space FAST corner detector and SURF descriptor[J]. Chinese Journal of Liquid Crystals and Displays, 2014, 29(4):598-604.(in Chinese)
RUBLEE E, RABAUD V, KONOLIGE K, et al.. ORB: an efficient alternative to SIFT or SURF[C]. 2011 International Conference on Computer Vision (ICCV), 2011: 2564-2571.
聂海涛,龙科慧,马军, 等. 采用改进尺度不变特征变换在多变背景下实现快速目标识别[J]. 光学精密工程, 2015, 23(8):2349-2356.
NIE H T, LONG K H, MA J,et al.. Fast object recognition under multiple varying background using improved SIFT method[J]. Opt. Precision Eng., 2015, 23(8):2349-2356.(in Chinese)
ZHUANG Z, WANG H. A novel nonuniformity correction algorithm based on speeded up robust features extraction[J]. Infrared Physics & Technology, 2015, 73:281-285.
SALAHAT E N, SALEH H H M, SLUZEK A S, et al.. Architecture and method for real-time parallel detection and extraction of maximally stable extremal regions(MSERS), US:2016070970-A1[P]. 2016.
贾平,徐宁,张叶. 基于局部特征提取的目标自动识别[J]. 光学精密工程, 2013, 21(7):1898-1905.
JIA P, XU N, ZHANG Y. Automatic target recognition based on local feature extraction[J]. Opt. Precision Eng., 2013, 21(7):1898-1905.(in Chinese)
LINDEBERG T. Scale-space Theory in Computer Vision[M].Springer Science & Business Media, 2013.
LINDEBERG T. Feature detection with automatic scale selection[J]. International Journal of Computer Vision, 1998, 30(2):79-116.
王灿进,孙涛,陈娟. 局部不变特征匹配的并行加速技术研究[J]. 液晶与显示, 2014, 29(2):266-274.
WANG C J, SUN T, CHEN J. Speeding up local invariant feature matching using parallel technology[J]. Chinese Journal of Liquid Crystals and Displays, 2014, 29(2):266-274.(in Chinese)
LOWE D G. Object recognition from local scale-invariant features[C]. Proceedings of the Seventh IEEE International Conference on Computer Vision, 1999, 2: 1150-1157.
YU G, MOREL J M. ASIFT: an algorithm for fully affine invariant comparison[J]. Image Processing On Line, 2011, 1.
王永明,王贵锦. 图像局部不变性特征与描述[M]. 北京:国防工业出版社, 2010.
WANG Y M, WANG G J. Image Local Invariant Features and Descriptors[M]. Beijing:National Defence Industry Press, 2010.(in Chinese)
MARR D. Representing visual information (A)[J].Journal of Optical Society of America,1977,10(10):1400.
LOWE D G. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision, 2004, 60(2): 91-110.
PERONA P, MALIK J. Scale-space and edge detection using anisotropic diffusion[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 1990, 12(7):629-639.
GUICHARD F, MONASSE P. Fast computation of a contrast-invariant image representation[J]. IEEE Transactions on Image Processing, 2000, 11(3): 121-123.
BIADGIE Y, SOHN K A. Feature detector using adaptive accelerated segment test[C]. 2014 International Conference on Information Science and Applications (ICISA), 2014: 1-4.
ROSTEN E, PORTER R, DRUMMOND T. Faster and better: a machine learning approach to corner detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(1): 105-119.
SUN J, THORPE C, XIE N H, et al.. Object category classification using occluding contours[J]. Lecture Notes in Computer Science, 2010, 6453:296-305.
TUYTELAARS T, MIKOLAJCZYK K. Local invariant feature detectors:a survey[J]. Foundations & Trends in Computer Graphics & Vision, 2007, 3(3):177-280.
李实秋,雷建军,周志远, 等. 基于SIFT匹配的多视点立体图像零视差调整[J]. 红外与激光工程, 2015, 44(2):764-768.
LI SH Q, LEI J J, ZHOU ZH Y, et al.. Zero-disparity adjustment of multiview stereoscopic images based on SIFT matching[J]. Infrared and Laser Engineering, 2015, 44(2): 764-768.(in Chinese)
赵爱罡,王宏力,杨小冈, 等. 融合几何特征的压缩感知SIFT描述子[J]. 红外与激光工程, 2015, 44(3):1085-1091.
ZHAO A G, WANG H L, YANG X G, et al.. Compressed sensing SIFT descriptor mixed with geometrical features[J]. Infrared and Laser Engineering, 2015, 44(3): 1085-1091.(in Chinese)
赵春阳,赵怀慈. 多模态鲁棒的局部特征描述符[J]. 光学精密工程, 2015,23(5):1474-1483.
ZHAO CH Y, ZHAO H C. Multimodality robust local feature descriptors[J]. Opt. Precision Eng., 2015, 23(5): 1474-1483.(in Chinese)
MIKOLAJCZYK K, TUYTELAARS T, SCHMID C. A comparison of affine region detectors[J]. International Journal of Computer Vision, 2005,65(1):63-72.