1. Key Laboratory of Nondestructive Testing, Ministry of Education, Nanchang Hangkong University, Nanchang 330063, China
2. AECC Shenyang Liming Aero-Engine Co., Ltd., Shenyang 110043, China
YUAN Daiyu (1998-), female, from Qianjiang, Hubei, China; M.S. candidate. She received her B.S. degree from Kunming University of Science and Technology in 2019. Her research focuses on infrared image processing. E-mail: 249529247@qq.com
YUAN Lihua (1970-), female, from Ganzhou, Jiangxi, China; Ph.D., associate professor, and M.S. supervisor. She received her M.S. degree from Nanchang University in 1998 and her Ph.D. from Nanjing University of Aeronautics and Astronautics in 2009. Her research focuses on infrared nondestructive testing. E-mail: lihuayuan@nchu.edu.cn
Received: 2022-08-09; Revised: 2022-09-26; Published in print: 2023-04-10
YUAN Daiyu, YUAN Lihua, XI Tengyan, et al. Image fusion of dual-discriminator generative adversarial network and latent low-rank representation [J]. Optics and Precision Engineering, 2023, 31(7): 1085-1095. DOI: 10.37188/OPE.20233107.1085.
To improve the visual quality of infrared and visible image fusion, each source image is first decomposed by latent low-rank representation into a low-rank component and a noise-suppressed sparse component. The sparse components are then fused by a weighted sum whose weights are determined with the KL transform, yielding the fused sparse image. Next, a dual-discriminator generative adversarial network is redesigned: features of the two low-rank components, extracted with the VGG16 network, serve as its input, and the adversarial game between the generator and the two discriminators produces the fused low-rank image. Finally, the fused sparse image and the fused low-rank image are superimposed to obtain the fusion result. Experimental results show that on the TNO dataset, compared with the five state-of-the-art methods listed, the proposed method achieves the best results on all five metrics (entropy, standard deviation, mutual information, sum of the correlations of differences, and multi-scale structural similarity), improving on the second-best values by 2.43%, 4.68%, 2.29%, 2.24%, and 1.74%, respectively. On the RoadScene dataset it is best only on the sum of the correlations of differences and multi-scale structural similarity, and the other three metrics are second only to GTF (gradient transfer and total variation minimization); however, its visual quality is clearly better than that of GTF. Taken together, the subjective and objective evaluations confirm that the proposed method yields high-quality fused images and offers clear advantages over the compared methods.
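The KL-transform weighting of the sparse components mentioned above can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the specific rule assumed here is that the weights come from the principal eigenvector of the 2x2 covariance matrix of the two flattened components, normalized to sum to one.

```python
import numpy as np

def kl_fusion_weights(s1, s2):
    """Estimate fusion weights for two sparse components via the
    KL (Karhunen-Loeve) transform: take the eigenvector of the 2x2
    covariance matrix belonging to the largest eigenvalue, and
    normalize it so the two weights sum to one.
    NOTE: illustrative assumption, not the paper's exact formula."""
    x = np.stack([s1.ravel(), s2.ravel()])   # shape (2, N): rows are variables
    cov = np.cov(x)                          # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    principal = np.abs(eigvecs[:, -1])       # eigenvector of largest eigenvalue
    w = principal / principal.sum()          # normalize: w1 + w2 = 1
    return w[0], w[1]

def fuse_sparse(s1, s2):
    """Weighted fusion of the two sparse components."""
    w1, w2 = kl_fusion_weights(s1, s2)
    return w1 * s1 + w2 * s2
```

The final superposition step in the paper would then simply add this fused sparse image to the fused low-rank image produced by the GAN.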
SHEN Y, HUANG C H, HUANG F, et al. Research progress of infrared and visible image fusion technology [J]. Infrared and Laser Engineering, 2021, 50(9): 152-169. (in Chinese). doi: 10.3788/IRLA20200467
MA J L. Infrared and visible image fusion based on visual saliency map and weighted least square optimization [J]. Infrared Physics & Technology, 2017, 82: 8-17. doi: 10.1016/j.infrared.2017.02.005
MA J Y. Infrared and visible image fusion via gradient transfer and total variation minimization [J]. Information Fusion, 2016, 31: 100-109. doi: 10.1016/j.inffus.2016.02.001
LI H, WU X J. DenseFuse: a fusion approach to infrared and visible images [J]. IEEE Transactions on Image Processing, 2019, 28(5): 2614-2623. doi: 10.1109/tip.2018.2887342
LI H, WU X J, DURRANI T. NestFuse: an infrared and visible image fusion architecture based on nest connection and spatial/channel attention models [J]. IEEE Transactions on Instrumentation and Measurement, 2020, 69(12): 9645-9656. doi: 10.1109/tim.2020.3005230
LI H. RFN-Nest: an end-to-end residual fusion network for infrared and visible images [J]. Information Fusion, 2021, 73: 72-86. doi: 10.1016/j.inffus.2021.02.023
GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial networks [EB/OL]. 2014: arXiv:1406.2661. https://arxiv.org/abs/1406.2661
MA J Y. FusionGAN: a generative adversarial network for infrared and visible image fusion [J]. Information Fusion, 2019, 48: 11-26. doi: 10.1016/j.inffus.2018.09.004
MA J Y, XU H, JIANG J J, et al. DDcGAN: a dual-discriminator conditional generative adversarial network for multi-resolution image fusion [J]. IEEE Transactions on Image Processing, 2020, 29: 4980-4995. doi: 10.1109/tip.2020.2977573
CHEN J H, YANG J. Robust subspace segmentation via low-rank representation [C]. ICML, 2014. doi: 10.1109/tcyb.2013.2286106
LIU G C, YAN S C. Latent low-rank representation for subspace segmentation and feature extraction [C]. 2011 International Conference on Computer Vision, Barcelona, Spain. IEEE, 2012: 1615-1622. doi: 10.1109/iccv.2011.6126422
LEE J, CHOE Y. Low rank matrix recovery via augmented Lagrange multiplier with nonconvex minimization [C]. 2016 IEEE 12th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP), Bordeaux, France. IEEE, 2016: 1-5. doi: 10.1109/ivmspw.2016.7528217
XU H, MA J Y, JIANG J J, et al. U2Fusion: a unified unsupervised image fusion network [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(1): 502-518. doi: 10.1109/tpami.2020.3012548
JI Q, SHI W X, TIAN M, et al. Multispectral image compression based on uniting KL transform and wavelet transform [J]. Infrared and Laser Engineering, 2016, 45(2): 275-281. (in Chinese). doi: 10.3788/irla201645.0228004
ZHANG X C, YE P, XIAO G. VIFB: a visible and infrared image fusion benchmark [C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA. IEEE, 2020: 468-478. doi: 10.1109/cvprw50498.2020.00060
ASLANTAS V, BENDES E. A new image quality metric for image fusion: the sum of the correlations of differences [J]. AEU - International Journal of Electronics and Communications, 2015, 69(12): 1890-1896. doi: 10.1016/j.aeue.2015.09.004
SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition [EB/OL]. 2014: arXiv:1409.1556. https://arxiv.org/abs/1409.1556