School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, Gansu, China
[ "杨艳春(1979-),女,新疆五家渠人,副教授,博士,硕士生导师,2002年于兰州交通大学获得工学学士,2007年于兰州交通大学获得硕士学位,2014年于兰州交通大学获得博士学位,目前研究的方向是图像融合和图像处理。E-mail: yangyanchun102@sina.com" ]
[ "高晓宇(1997-),男,内蒙古乌兰察布人,硕士研究生,2019年于兰州交通大学大学获得理学学士,主要研究方向是图像融合和图像处理。E-mail: xiaoyu19971101@163.com" ]
Received: 2021-05-27
Revised: 2021-07-01
YANG Yan-chun, GAO Xiao-yu, DANG Jian-wu, et al. Infrared and visible image fusion based on WEMD and generative adversarial network reconstruction [J]. Optics and Precision Engineering, DOI: 10.37188/OPE..0001
To address the blurred edges and low contrast that arise in the fusion of infrared and visible images, this paper proposes a fusion algorithm based on two-dimensional window empirical mode decomposition (WEMD) and generative adversarial network (GAN) reconstruction. First, the infrared and visible images are decomposed by WEMD into intrinsic mode function (IMF) components and residual components. The IMF components are fused by principal component analysis and the residual components by weighted averaging, and the fused components are reconstructed into a preliminary fused image. This preliminary image is then fed into a GAN, where it plays an adversarial game against the visible image so that missing background information is supplemented, yielding the final fused image. For objective evaluation, the average gradient (AG), edge intensity (EI), entropy (EN), structural similarity (SSIM), and mutual information (MI) were used; compared with five other methods, these metrics improved on average by 46.13%, 39.40%, 19.91%, 3.72%, and 33.1%, respectively. The experimental results show that the proposed algorithm better preserves the edge and texture details of the source images while highlighting the infrared targets, offers good visual quality, and has clear advantages in the objective evaluation metrics.
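To make the pipeline described in the abstract concrete, the sketch below walks through the decomposition-and-fusion stage: WEMD decomposition into IMF and residual components, PCA fusion of the IMF components, and weighted averaging of the residuals. This is a minimal illustration, not the authors' implementation: wemd_decompose is a hypothetical stand-in (simple Gaussian smoothing replaces true 2-D window empirical mode decomposition so the script runs end to end), the residual_weight parameter is assumed, and the final GAN refinement stage is omitted.

```python
# Minimal sketch of the WEMD + PCA / weighted-average fusion stage (assumptions noted above).
import numpy as np
from scipy.ndimage import gaussian_filter


def wemd_decompose(img, levels=2):
    """Hypothetical placeholder for 2-D WEMD: peel off 'IMF-like' detail layers
    by subtracting progressively smoother versions of the image."""
    imfs, residual = [], img.astype(np.float64)
    for k in range(levels):
        smooth = gaussian_filter(residual, sigma=2 ** (k + 1))
        imfs.append(residual - smooth)   # detail layer (stand-in for an IMF)
        residual = smooth                # remaining low-frequency content
    return imfs, residual


def pca_fuse(a, b):
    """Fuse two detail layers with weights taken from the principal eigenvector
    of their 2x2 covariance matrix (classic PCA fusion rule)."""
    cov = np.cov(np.vstack([a.ravel(), b.ravel()]))
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = np.abs(eigvecs[:, np.argmax(eigvals)])
    w = v / v.sum()
    return w[0] * a + w[1] * b


def preliminary_fuse(ir, vis, levels=2, residual_weight=0.5):
    """Preliminary fusion: PCA rule on IMF layers, weighted average on residuals."""
    ir_imfs, ir_res = wemd_decompose(ir, levels)
    vis_imfs, vis_res = wemd_decompose(vis, levels)
    fused_imfs = [pca_fuse(a, b) for a, b in zip(ir_imfs, vis_imfs)]
    fused_res = residual_weight * ir_res + (1 - residual_weight) * vis_res
    return sum(fused_imfs) + fused_res   # reconstruct the preliminary fused image
    # (In the paper, this result is further refined by a GAN against the visible image.)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ir = rng.random((128, 128))          # stand-ins for registered IR / visible images
    vis = rng.random((128, 128))
    print(preliminary_fuse(ir, vis).shape)   # (128, 128)
```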