1. School of Computer Science and Engineering, North Minzu University, Yinchuan 750021, China
2. Key Laboratory of Image and Graphics Intelligent Processing of State Ethnic Affairs Commission, North Minzu University, Yinchuan 750021, China
3. School of Medical Information and Engineering, Ningxia Medical University, Yinchuan 750004, China
[ "周 涛(1977-),男,宁夏同心人,博士,教授,2010年于西北工业大学获得博士学位,主要从事医学图像分析处理、计算机辅助诊断、深度学习、模式识别等方面的研究。E-mail: zhoutaonxmu@126.com" ]
[ "张祥祥(1999-),女,河南驻马店人,硕士研究生,2019年于商丘师范学院获得学士学位,主要从事医学图像处理、计算机辅助诊断、深度学习等方面的研究。 E-mail: zxx19990503@163.com" ]
ZHOU Tao, ZHANG Xiangxiang, LU Huiling, et al. CT and PET medical image fusion based on LL-GG-LG Net[J]. Optics and Precision Engineering, 2023, 31(20): 3050-3064. DOI: 10.37188/OPE.20233120.3050.
Multimodal medical image fusion plays a crucial role in clinical applications. Most existing methods focus on local feature extraction, explore global dependencies insufficiently, and ignore the interaction between global and local information, which makes it difficult to handle the pattern complexity and intensity similarity between the surrounding tissue (background) and the lesion region (foreground). To address these issues, this paper proposes the LL-GG-LG Net model for PET and CT medical image fusion. First, a Local-Local fusion module (LL Module) is proposed, which uses a two-level attention mechanism to better attend to local detail features. Second, a Global-Global fusion module (GG Module) is designed, which introduces local information into the global representation by adding a residual connection mechanism to the Swin Transformer, improving the Transformer's attention to local information. Then, a Local-Global fusion module (LG Module) is proposed, built on an adaptive dense fusion network obtained by differentiable neural architecture search; it fully captures global relationships while preserving local cues, effectively addressing the high similarity between background and lesion regions. The model is validated on a clinical multimodal lung medical image dataset. Experimental results show that, compared with the best of seven competing methods, the proposed method improves the perceptual fusion quality metrics of average gradient (AG), edge intensity (EI), edge preservation (Q^AB/F), spatial frequency (SF), standard deviation (SD), and information entropy (IE) by an average of 21.5%, 11%, 4%, 13%, 9%, and 3%, respectively. The model highlights lesion-region information, and the fused images have a clear structure and rich texture detail.
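The abstract says the LL module uses a two-level attention mechanism to focus on local detail, but does not spell out its form. As a hedged illustration only, the sketch below shows one common realization, channel attention followed by spatial attention (in the style of CBAM); the class name, reduction ratio, and kernel sizes are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    """Two-level attention sketch: channel attention, then spatial attention.
    Assumed CBAM-style structure; not the paper's actual LL module."""
    def __init__(self, channels, reduction=8):  # channels divisible by reduction
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global channel statistics
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),     # mix avg/max maps
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_mlp(x)                        # level 1: channel attention
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        x = x * self.spatial(torch.cat([avg, mx], dim=1))  # level 2: spatial attention
        return x
```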
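The LG module is described as an adaptive dense fusion network obtained by differentiable neural architecture search. The core mechanism of such a search, following DARTS, is a continuous relaxation in which every candidate operation is applied and their outputs are mixed by softmax-weighted architecture parameters. The sketch below shows that generic building block under an assumed candidate set and channel size; the paper's actual search space is not given here. After the search, each mixed operation is typically discretized by keeping only the candidate with the largest weight.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_candidates(channels):
    # Hypothetical candidate operations for one edge of a dense fusion cell;
    # the paper does not list its search space, these are common DARTS choices.
    return nn.ModuleList([
        nn.Identity(),                                # skip connection
        nn.Conv2d(channels, channels, 3, padding=1),  # 3x3 convolution
        nn.Conv2d(channels, channels, 5, padding=2),  # 5x5 convolution
        nn.AvgPool2d(3, stride=1, padding=1),         # average pooling
    ])

class MixedOp(nn.Module):
    """DARTS continuous relaxation: softmax-weighted sum of candidate ops."""
    def __init__(self, channels):
        super().__init__()
        self.ops = make_candidates(channels)
        # Architecture parameters (alpha), learned jointly with the weights.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Usage sketch:
# op = MixedOp(channels=64)
# y = op(torch.randn(1, 64, 32, 32))  # same channels and spatial size
```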
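The evaluation metrics quoted in the abstract have standard definitions. The NumPy sketch below computes average gradient, spatial frequency, and information entropy for a grayscale float image, assuming an 8-bit intensity range; standard deviation is simply np.std(img), while edge intensity (Sobel-based) and Q^AB/F (which needs the two source images) are omitted for brevity.

```python
import numpy as np

def average_gradient(img):
    # AG: mean local gradient magnitude from forward differences,
    # cropped to a common (H-1, W-1) shape.
    gx = np.diff(img, axis=1)[:-1, :]
    gy = np.diff(img, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def spatial_frequency(img):
    # SF: root of combined row-frequency and column-frequency energies.
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return float(np.sqrt(rf ** 2 + cf ** 2))

def information_entropy(img, bins=256):
    # IE: Shannon entropy of the intensity histogram (assumes 0-255 range).
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```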
medical image fusion; deep learning; attention mechanism; differentiable architecture search; dense network