1. Department of Electronic and Optical Engineering, Shijiazhuang Campus, Army Engineering University, Shijiazhuang 050003, Hebei, China
2. Unit 32181 of the Chinese People's Liberation Army, Shijiazhuang 050000, Hebei, China
[ "刘先红(1977-), 男, 河北无极人, 工程师, 2006年于西安电子科技大学获得硕士学位, 主要从事图像融合、目标跟踪等方面的研究。E-mail: lxhfree@126.com" ]
Received: 2017-10-10
Accepted: 2017-11-13
Published in print: 2018-05-25
Xian-hong LIU, Meng-ze QIN. Infrared and visible image fusion using guided filter and convolutional sparse representation[J]. Optics and Precision Engineering, 2018, 26(5): 1242-1253. DOI: 10.3788/OPE.20182605.1242.
To address the problem that information from infrared and visible source images easily interferes mutually and degrades fusion quality, a new image fusion method combining the guided filter, the Gaussian low-pass filter, and the nonsubsampled directional filter bank was proposed. Each source image was decomposed by the guided filter and the Gaussian low-pass filter into a low-frequency approximation component, a strong-edge component, and a high-frequency detail component; the high-frequency detail component was then processed by the nonsubsampled directional filter bank to obtain high-frequency directional detail components. The low-frequency approximation components were fused with a rule based on local regional energy, the strong-edge components with a rule based on convolutional sparse representation, and the high-frequency directional detail components with a rule based on an improved pulse coupled neural network; the final fused image was obtained by applying the inverse transform to the fused components. Experiments on multiple pairs of infrared and visible images show that the proposed algorithm outperforms traditional fusion methods in both subjective visual quality and objective evaluation. Compared with the fusion method based on the discrete wavelet transform and sparse representation, which gives the best results among the traditional methods, the standard deviation (STD), information entropy (IE), mutual information (MI), average gradient (AG), and spatial frequency (SF) of the proposed method increase on average by 20.28%, 2.24%, 47.41%, 5.34%, and 8.02%, respectively.
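The decomposition stage described in the abstract can be made concrete with a short sketch. The fragment below is a minimal illustration, not the authors' implementation: it assumes a standard box-window guided filter (He et al.) as the edge-preserving smoother and a Gaussian low-pass filter for the approximation layer, and splits a source image into the three parts named above (low-frequency approximation, strong edge, high-frequency detail). The radius, regularization, and sigma values are illustrative placeholders; the directional filtering of the detail part and the three fusion rules are omitted.

```python
# Minimal sketch of the three-part decomposition (assumed parameter values).
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter


def guided_filter(guide, src, r=8, eps=0.01):
    """Guided image filter (He et al., 2013) with a box window of radius r."""
    box = lambda x: uniform_filter(x, size=2 * r + 1)

    mean_i = box(guide)
    mean_p = box(src)
    corr_ip = box(guide * src)
    corr_ii = box(guide * guide)

    var_i = corr_ii - mean_i * mean_i      # local variance of the guide
    cov_ip = corr_ip - mean_i * mean_p     # local covariance of guide and source

    a = cov_ip / (var_i + eps)             # per-window linear coefficients
    b = mean_p - a * mean_i
    return box(a) * guide + box(b)         # averaged local linear model


def three_part_decomposition(img, r=8, eps=0.01, sigma=5.0):
    """Split an image into low-frequency, strong-edge and detail components."""
    img = img.astype(np.float64)
    base = guided_filter(img, img, r, eps)   # edge-preserving smoothed image
    low = gaussian_filter(img, sigma)        # low-frequency approximation
    edge = base - low                        # strong-edge component
    detail = img - base                      # high-frequency detail component
    # By construction, low + edge + detail reproduces img exactly.
    return low, edge, detail
```

Because the three parts sum back to the input, the inverse transform in the last step amounts to adding the fused low-frequency and strong-edge components to the detail component reconstructed from its fused directional sub-bands.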