School of Electronics and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China
Received: 17 August 2022; Revised: 26 September 2022; Published: 25 May 2023
YANG Yanchun,LI Yongping,DANG Jianwu,et al.Infrared and visible image fusion based on fast alternating guided filtering and CNN[J].Optics and Precision Engineering,2023,31(10):1548-1562. DOI: 10.37188/OPE.20233110.1548.
To address the loss of detail information, blurred edges, and artifacts in infrared and visible image fusion, this paper proposes a fast alternating guided filter that significantly increases operating efficiency while preserving the quality of the fused image, and combines it with a convolutional neural network (CNN) and infrared feature extraction for effective fusion. First, quadtree decomposition and Bessel interpolation are used to extract the infrared brightness features of the source images, which are combined with the visible image to obtain the initial fused image. Second, base-layer and detail-layer information of the source images is obtained through fast alternating guided filtering; the base layers are fused using the CNN and the Laplace transform, and the detail layers are fused using a saliency-measurement method. Finally, the initial, base, and detail fused images are summed to obtain the final fusion result. Owing to the fast alternating guided filtering and the feature-extraction stage, the final fusion result contains rich texture detail and clear edges. Experiments show that the fusion results have good visual fidelity, and that, relative to the comparison methods, the information entropy, standard deviation, spatial frequency, wavelet-feature mutual information, visual fidelity, and average gradient improve on average by 9.9%, 6.8%, 43.6%, 11.3%, 32.3%, and 47.1%, respectively.
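The paper's accelerated filter and its CNN fusion rule are not reproduced in this abstract, but the base/detail decomposition it builds on can be illustrated. The sketch below is a minimal, assumption-laden version: an ordinary (non-accelerated) guided filter applied alternately with each image guiding the other, plain averaging standing in for the CNN/Laplace base rule, and a choose-max on absolute detail standing in for the saliency measurement. All function names and parameter values are illustrative, not the paper's.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=1e-2):
    """Edge-preserving guided filter: I is the guide image, p the input.
    Local linear model q = a*I + b, coefficients averaged over windows."""
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    var_I = uniform_filter(I * I, size) - mean_I ** 2
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)

def alternating_base_layers(ir, vis, iters=3):
    """Alternately smooth each image using the other's current estimate
    as the guide, so mutually consistent structure survives in the base."""
    b_ir, b_vis = ir.copy(), vis.copy()
    for _ in range(iters):
        b_ir, b_vis = guided_filter(b_vis, b_ir), guided_filter(b_ir, b_vis)
    return b_ir, b_vis

def fuse(ir, vis):
    """Base/detail fusion sketch: average bases, choose-max details."""
    b_ir, b_vis = alternating_base_layers(ir, vis)
    d_ir, d_vis = ir - b_ir, vis - b_vis          # detail = source - base
    base = 0.5 * (b_ir + b_vis)                   # stand-in for CNN base rule
    detail = np.where(np.abs(d_ir) >= np.abs(d_vis), d_ir, d_vis)
    return np.clip(base + detail, 0.0, 1.0)
```

With inputs normalized to [0, 1], `fuse` returns an image of the same shape whose detail layer at each pixel comes from whichever source had the larger local detail response.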
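Several of the objective metrics reported above (information entropy, spatial frequency, average gradient) have standard textbook definitions; a minimal numpy sketch, assuming grayscale images normalized to [0, 1], is shown below. The wavelet-feature mutual information and visual-fidelity metrics are more involved and are omitted here.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def spatial_frequency(img):
    """Root of row-frequency^2 + column-frequency^2 over first differences."""
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return float(np.hypot(rf, cf))

def average_gradient(img):
    """Mean local gradient magnitude; larger means sharper detail."""
    gx = np.diff(img, axis=1)[:-1, :]
    gy = np.diff(img, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```

A constant image scores zero on all three, and a more textured fused result scores higher, which is why these metrics are read as "larger is better" in fusion comparisons.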