1. University of Chinese Academy of Sciences, Beijing 100049, China
2. Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, Liaoning, China
3. Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang 110016, Liaoning, China
Received: 2014-12-09; Revised: 2015-01-22; Published in print: 2015-05-25
ZHAO Chun-Yang, ZHAO Huai-Ci. Multimodality robust local feature descriptors [J]. Optics and Precision Engineering, 2015, 23(5): 1474-1483. DOI: 10.3788/OPE.20152305.1474.
Intensity-based local feature matching methods are sensitive to image contrast variations, so their performance declines significantly when they are applied to multimodal image registration. To solve this problem, a multimodality robust local feature descriptor and a corresponding feature matching method were proposed. Firstly, a multimodality robust corner and line-segment extraction method was proposed based on phase congruency and local direction information, both of which are insensitive to contrast variations; compared with intensity-based methods, it extracts more common corners and line segments between multimodal images with significant contrast differences. Then, a feature region consisting of 48 uniformly distributed circular sub-regions centered on each corner was selected, and a 96-dimensional feature vector was generated from the distances between the corner and the line segments located in the sub-regions and from the lengths of those segments. Finally, feature matching was performed with the normalized correlation function as the similarity measure, and a location-constraint-based RANdom SAmple Consensus (RANSAC) algorithm was used to remove false matches. Experimental results indicate that the matching precision and repeatability of the proposed method on multimodal images reach 80% and 13%, respectively, which are 2-4 times and 4-7 times those of intensity-based methods such as Symmetric Scale-Invariant Feature Transform (S-SIFT) and Multimodal Speeded-Up Robust Features (MM-SURF). The proposed method therefore significantly outperforms comparable methods.
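The abstract fixes only the dimensions of the descriptor (48 circular sub-regions around each corner, two measurements per sub-region, 96 dimensions in total), not the exact sub-region layout. The sketch below is one plausible reading, assuming the 48 sub-region centers lie on three concentric rings of sixteen directions and that each line segment is assigned to the sub-region containing its midpoint; the ring radii, sub-region radius, and normalization are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def subregion_centers(corner, radii=(8, 16, 24), n_angles=16):
    """48 sub-region centers on 3 concentric rings x 16 directions around
    the corner (this ring/angle layout is an assumption; the paper only
    states 48 uniformly distributed circular sub-regions)."""
    cx, cy = corner
    centers = []
    for r in radii:
        for k in range(n_angles):
            a = 2.0 * np.pi * k / n_angles
            centers.append((cx + r * np.cos(a), cy + r * np.sin(a)))
    return np.asarray(centers)          # shape (48, 2)

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the segment a-b."""
    p, a, b = map(np.asarray, (p, a, b))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def build_descriptor(corner, segments, sub_radius=6.0):
    """96-D descriptor: for each of the 48 sub-regions accumulate
    (i) the corner-to-segment distance and (ii) the length of the
    segments whose midpoint falls inside the sub-region.
    `segments` is an (N, 4) array of (x1, y1, x2, y2) endpoints."""
    centers = subregion_centers(corner)
    desc = np.zeros(2 * len(centers))
    segments = np.asarray(segments, dtype=float)
    if segments.size:
        mids = 0.5 * (segments[:, :2] + segments[:, 2:])
        lengths = np.linalg.norm(segments[:, 2:] - segments[:, :2], axis=1)
        for i, c in enumerate(centers):
            inside = np.linalg.norm(mids - c, axis=1) <= sub_radius
            for j in np.flatnonzero(inside):
                desc[2 * i] += point_segment_distance(
                    corner, segments[j, :2], segments[j, 2:])
                desc[2 * i + 1] += lengths[j]
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc
```

With corners from the phase-congruency detector and line segments from an LSD-style detector, calling build_descriptor for every corner would yield the 96-dimensional vectors consumed by the matching stage.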
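For the matching stage, the following minimal sketch uses normalized correlation as the similarity measure followed by a RANSAC purification step. The greedy nearest-neighbor strategy, the 0.8 acceptance threshold, and the use of OpenCV's homography-based RANSAC (cv2.findHomography) are assumptions standing in for the paper's location-constraint-based RANSAC variant, whose details are not given in the abstract.

```python
import numpy as np
import cv2

def ncc(d1, d2):
    """Normalized correlation between two descriptor vectors."""
    a = d1 - d1.mean()
    b = d2 - d2.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom > 0 else 0.0

def match_descriptors(desc1, desc2, threshold=0.8):
    """Greedy matching: each descriptor in desc1 is paired with the desc2
    entry of maximum normalized correlation above `threshold`
    (threshold value is illustrative, not from the paper)."""
    matches = []
    for i, d1 in enumerate(desc1):
        scores = [ncc(d1, d2) for d2 in desc2]
        j = int(np.argmax(scores))
        if scores[j] >= threshold:
            matches.append((i, j))
    return matches

def purify_matches(pts1, pts2, matches, reproj_thresh=3.0):
    """Reject false matches with RANSAC. Plain homography-based RANSAC
    from OpenCV is used here as a stand-in for the paper's
    location-constrained RANSAC variant."""
    if len(matches) < 4:
        return []
    src = np.float32([pts1[i] for i, _ in matches])
    dst = np.float32([pts2[j] for _, j in matches])
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    if mask is None:
        return []
    return [m for m, keep in zip(matches, mask.ravel()) if keep]
```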
CAO X Q, MA C W. A radiometric varying robust stereo matching algorithm [J]. Robot, 2014, 36(5): 634-640. (in Chinese)
LIU ZH W, LIU D SH, LIU P. SIFT feature matching algorithm of multi-source remote image [J]. Opt. Precision Eng., 2013, 21(8): 2146-2153. (in Chinese)
YANG G, TONG T, LU S Y, et al. Fusion of infrared and visible images based on multi-features [J]. Opt. Precision Eng., 2014, 22(2): 489-496. (in Chinese)
SURI S, REINARTZ P. Mutual-information-based registration of TerraSAR-X and Ikonos imagery in urban areas [J]. IEEE Transactions on Geoscience and Remote Sensing, 2010, 48(2): 939-949.
ZOU Y B, DONG F M, LE B J, et al. Image thresholding based on template matching with arctangent Hausdorff distance measure [J]. Optics and Lasers in Engineering, 2013, 51(5): 600-609.
BODENSTEINER C, HUEBNER W, JUENGLING K, et al. Local multi-modal image matching based on self-similarity [C]. IEEE International Conference on Image Processing, Hong Kong, 2010: 937-940.
LOWE D G. Distinctive image features from scale-invariant keypoints [J]. International Journal of Computer Vision, 2004, 60(2): 91-110.
BAY H, TUYTELAARS T, VAN GOOL L. SURF: speeded up robust features [C]. European Conference on Computer Vision, Graz, Austria, 2006: 404-417.
CHEN J, TIAN J. Real-time multi-modal rigid registration based on a novel symmetric-SIFT descriptor [J]. Progress in Natural Science, 2009, 19(5): 643-651.
ZHAO D, YANG Y, JI ZH H, et al. Rapid multimodality registration based on MM-SURF [J]. Neurocomputing, 2014, 131(5): 87-97.
WANG ZH H, WU F CH, HU ZH Y. MSLD: a robust descriptor for line matching [J]. Pattern Recognition, 2009, 42(5): 941-953.
PANG G, NEUMANN U. The Gixel Array Descriptor (GAD) for multi-modal image matching [C]. IEEE Workshop on Applications of Computer Vision, Clearwater Beach, USA, 2013: 497-504.
KOVESI P. Phase congruency: a low-level image invariant [J]. Psychological Research, 2000, 64(2): 136-148.
KOVESI P. Phase congruency detects corners and edges [C]. The Australian Pattern Recognition Society Conference: DICTA, 2003: 309-318.
FELSBERG M, SOMMER G. The monogenic signal [J]. IEEE Transactions on Signal Processing, 2001, 49(12): 3136-3144.
GIOI R G V, JAKUBOWICZ J, MOREL J M, et al. LSD: a fast line segment detector with a false detection control [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(4): 722-732.