1. School of Computer Science and Engineering, North Minzu University, Yinchuan 750021, China
2. The Key Laboratory of Images and Graphics Intelligent Processing of State Ethnic Affairs Commission, North Minzu University, Yinchuan 750021, China
3. School of Science, Ningxia Medical University, Yinchuan 750004, China
ZHOU Tao, LIU Yuncan, HOU Senbao, et al. REC-ResNet: feature enhancement model for COVID-19 aided diagnosis[J]. Optics and Precision Engineering, 2023, 31(14): 2093-2110. DOI: 10.37188/OPE.20233114.2093.
Auxiliary diagnosis of COVID-19 (Corona Virus Disease 2019) based on residual neural networks has recently become a research hotspot. However, the lesion regions in COVID-19 chest X-Ray images are highly varied: their size, shape, and location differ from patient to patient, their boundaries with surrounding tissue are blurred, and their contrast is low, which makes it difficult to fully extract effective features of the lesion regions. To address these problems, this paper proposes a COVID-19 auxiliary diagnosis model, REC-ResNet, which takes ResNet50 as its backbone and introduces three feature enhancement strategies to improve the model's feature extraction ability. First, a residual adaptive feature fusion module fuses, by adaptive weighting, the hierarchical features from the different residual blocks within each stage of the model; this module not only models the correlations among channels but also learns to adaptively estimate the relative importance of information at different levels. Second, an efficient feature enhancement Transformer module is introduced into the model backbone; it uses feature-enhanced multi-head self-attention to extract global information from chest X-Ray images, strengthening the model's representational ability and effectively compensating for the limited capacity of CNNs to capture global feature representations. Third, to obtain richer contextual information, a cross-layer attention enhancement module is proposed: channel attention and spatial attention are applied to enhance deep and shallow features, respectively, and, with long-range feature dependencies fully taken into account, high-level semantic information and low-level spatial details are effectively fused to achieve cross-layer attention feature enhancement, enabling the model to extract more effective features and further improve classification accuracy. Finally, experimental results on a COVID-19 chest X-Ray image dataset show that the proposed model achieves excellent classification performance compared with other state-of-the-art CNN classification models, with Acc, Pre, Rec, F1-score, and Spe of 97.58%, 97.60%, 97.58%, 97.59%, and 97.46%, respectively. The model is further interpreted with Grad-CAM visualization to make the learned features more intuitive. The proposed method can help clinicians make correct medical judgments and improve patient prognosis, providing effective support for the auxiliary diagnosis of COVID-19.
Keywords: COVID-19; chest X-Ray image; residual neural network; attention mechanism; feature enhancement
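The residual adaptive feature fusion described in the abstract can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the module name AdaptiveFeatureFusion, the squeeze-and-excitation-style gate, and the reduction ratio are assumptions made for illustration; the sketch only shows the core idea of fusing same-shaped hierarchical features from one ResNet50 stage with adaptively learned, softmax-normalized branch weights.

# Hypothetical sketch of adaptive weighted fusion of hierarchical features
# from the residual blocks of one ResNet50 stage. Names and the gate design
# are illustrative assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn


class AdaptiveFeatureFusion(nn.Module):
    """Fuse N same-shaped feature maps with adaptively learned weights."""

    def __init__(self, channels: int, num_inputs: int, reduction: int = 16):
        super().__init__()
        # Squeeze-and-excitation-style gate: global pooling captures channel
        # statistics, two 1x1 convolutions predict one weight per input branch.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels * num_inputs, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, num_inputs, kernel_size=1),
        )

    def forward(self, features):
        # features: list of N tensors, each of shape (B, C, H, W)
        stacked = torch.stack(features, dim=1)              # (B, N, C, H, W)
        concat = torch.cat(features, dim=1)                 # (B, N*C, H, W)
        weights = torch.softmax(self.gate(concat), dim=1)   # (B, N, 1, 1)
        weights = weights.unsqueeze(2)                      # (B, N, 1, 1, 1)
        return (stacked * weights).sum(dim=1)               # (B, C, H, W)


if __name__ == "__main__":
    fuse = AdaptiveFeatureFusion(channels=256, num_inputs=3)
    feats = [torch.randn(2, 256, 56, 56) for _ in range(3)]
    print(fuse(feats).shape)  # torch.Size([2, 256, 56, 56])

Softmax normalization keeps the branch weights on a comparable scale, so such a module can learn to emphasize whichever residual block carries the most discriminative lesion features, matching the abstract's notion of estimating the relative importance of different hierarchical levels.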