School of Construction Machinery, Chang'an University, Xi'an 710064, Shaanxi, China
Received: 04 May 2023; Revised: 16 June 2023; Published: 25 December 2023
XIA Xiaohua, ZHAO Qian, XIANG Huatao, et al. SIFT feature extraction method for the defocused blurred area of multi-focus images[J]. Optics and Precision Engineering, 2023, 31(24): 3630-3639. DOI: 10.37188/OPE.20233124.3630.
Conventional scale-invariant feature transform (SIFT) methods have difficulty extracting features from the defocused blurred areas of multi-focus images. As a result, the images share only a few local common features, which leads to poor multi-focus image registration accuracy and seriously degrades the quality of subsequent image fusion and 3D reconstruction. Based on an analysis of the uncertainty of feature extraction in defocused blurred areas, a SIFT feature extraction method for the defocused blurred areas of multi-focus images is proposed. SIFT features are first extracted from the sharply focused area of a multi-focus image; the features in the corresponding defocused blurred area are then obtained by optical flow tracking, which avoids the uncertainty of extracting SIFT features directly in the blurred area. Experimental results show that the proposed method extracts SIFT features in the defocused blurred area reliably and accurately, significantly increasing the number of SIFT feature matches between multi-focus images. Its feature extraction error ranges from 0.03 to 0.39 pixels, better than the 0.21 to 1.71 pixels of existing methods. The method reduces the uncertainty of SIFT feature extraction in defocused blurred areas and lays a foundation for accurate multi-focus image registration.
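The two-step pipeline described in the abstract (detect SIFT keypoints only where the image is sharp, then transfer them into the counterpart's defocus-blurred region by optical flow tracking) can be sketched with OpenCV. The snippet below is a minimal illustration of that idea, not the authors' implementation; the file names and the pyramidal Lucas-Kanade tracker parameters are assumptions.

```python
# Minimal sketch (not the authors' code): SIFT keypoints are detected in the
# sharply focused image, then tracked into the image whose matching region is
# defocus-blurred, instead of detecting SIFT directly in the blurred area.
import cv2
import numpy as np

img_focused = cv2.imread("focused.png", cv2.IMREAD_GRAYSCALE)      # placeholder file names
img_defocused = cv2.imread("defocused.png", cv2.IMREAD_GRAYSCALE)

# 1. SIFT keypoints in the focused (sharp) image only.
sift = cv2.SIFT_create()
keypoints = sift.detect(img_focused, None)
pts = np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)

# 2. Transfer the keypoints into the defocused image with pyramidal
#    Lucas-Kanade optical flow (window size and pyramid depth are assumptions).
tracked, status, err = cv2.calcOpticalFlowPyrLK(
    img_focused, img_defocused, pts, None,
    winSize=(21, 21), maxLevel=3,
    criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

# 3. Keep only successfully tracked correspondences.
mask = status.ravel() == 1
good_src = pts[mask].reshape(-1, 2)
good_dst = tracked[mask].reshape(-1, 2)
print(f"{len(good_dst)} SIFT features transferred into the defocused area")
```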