{"defaultlang":"zh","titlegroup":{"articletitle":[{"lang":"zh","data":[{"name":"text","data":"采用DoubleUNet网络的结直肠息肉分割算法"}]},{"lang":"en","data":[{"name":"text","data":"Colorectal polyp segmentation algorithm using DoubleUNet network"}]}]},"contribgroup":{"author":[{"name":[{"lang":"zh","surname":"徐","givenname":"昌佳","namestyle":"eastern","prefix":""},{"lang":"en","surname":"XU","givenname":"Changjia","namestyle":"eastern","prefix":""}],"stringName":[],"aff":[{"rid":"aff1","text":""}],"role":["first-author"],"bio":[{"lang":"zh","text":["徐昌佳(1996-),男,江西九江人,硕士研究生,2019年于江西理工大学获得学士学位,主要从事图像分割、医学图像处理等方面的研究。E-mail: 18720183006@163.com"],"graphic":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050333&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050354&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050335&type=","width":"22.01332855","height":"32.00399780","fontsize":""}],"data":[[{"name":"text","data":"徐昌佳"},{"name":"text","data":"(1996-),男,江西九江人,硕士研究生,2019年于江西理工大学获得学士学位,主要从事图像分割、医学图像处理等方面的研究。E-mail: "},{"name":"text","data":"18720183006@163.com"}]]}],"email":"18720183006@163.com","deceased":false},{"name":[{"lang":"zh","surname":"易","givenname":"见兵","namestyle":"eastern","prefix":""},{"lang":"en","surname":"YI","givenname":"Jianbing","namestyle":"eastern","prefix":""}],"stringName":[],"aff":[{"rid":"aff1","text":""}],"role":["corresp"],"corresp":[{"rid":"cor1","lang":"en","text":"E-mail: yijianbing8@163.com","data":[{"name":"text","data":"E-mail: yijianbing8@163.com"}]}],"bio":[{"lang":"zh","text":["易见兵(1980-),男,江西宜春人,博士,副教授,2003年于南方冶金学院获得学士学位,2009年于江西理工大学获得硕士学位,2017年于深圳大学获得博士学位,主要从事计算机视觉、图像配准、高性能计算等方面的研究。E-mail: yijianbing8@163.com"],"graphic":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050351&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050370&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050368&type=","width":"22.01332855","height":"32.00399780","fontsize":""}],"data":[[{"name":"text","data":"易见兵"},{"name":"text","data":"(1980-),男,江西宜春人,博士,副教授,2003年于南方冶金学院获得学士学位,2009年于江西理工大学获得硕士学位,2017年于深圳大学获得博士学位,主要从事计算机视觉、图像配准、高性能计算等方面的研究。E-mail: "},{"name":"text","data":"yijianbing8@163.com"}]]}],"email":"yijianbing8@163.com","deceased":false},{"name":[{"lang":"zh","surname":"曹","givenname":"锋","namestyle":"eastern","prefix":""},{"lang":"en","surname":"CAO","givenname":"Feng","namestyle":"eastern","prefix":""}],"stringName":[],"aff":[{"rid":"aff1","text":""}],"role":[],"deceased":false},{"name":[{"lang":"zh","surname":"方","givenname":"旺盛","namestyle":"eastern","prefix":""},{"lang":"en","surname":"FANG","givenname":"Wangsheng","namestyle":"eastern","prefix":""}],"stringName":[],"aff":[{"rid":"aff1","text":""}],"role":[],"deceased":false}],"aff":[{"id":"aff1","intro":[{"lang":"zh","text":"江西理工大学 信息工程学院,江西 赣州 341000","data":[{"name":"text","data":"江西理工大学 信息工程学院,江西 赣州 341000"}]},{"lang":"en","text":"College of Information Engineering, Jiangxi University of Science and Technology, Ganzhou 341000,China","data":[{"name":"text","data":"College of Information Engineering, Jiangxi University of Science and Technology, Ganzhou 341000,China"}]}]}]},"abstracts":[{"lang":"zh","data":[{"name":"p","data":[{"name":"text","data":"由于结直肠息肉的大小、颜色和质地各异,且息肉与周围粘膜的边界不清晰,导致息肉分割存在较大挑战。为提高结直肠息肉的分割准确率,本文提出了一种改进的DoubleUNet网络分割算法。该算法首先对息肉图像进行去反光处理,并通过数据扩增方法将训练数据集进行扩大;接着,在DoubleUNet网络的解码器部分引入注意力机制,并将网络中的空洞空间卷积池化金字塔(ASPP)模块替换为密集连接空洞空间卷积池化金字塔(DenseASPP)模块,以提高网络提取特征的能力;最后,为提高小目标的分割精度,提出利用Focal Tversky Loss函数作为本算法的损失函数。该算法在Kvasir-SEG、CVC-ClinicDB、ETIS-Larib、ISIC和DSB数据集测试中的准确率分别为0.953 0、0.964 2、0.815 7、0.950 3和0.964 1,而DoubleUNet算法在上述数据集的准确率分别为0.939 4、0.959 2、0.800 7、0.945 9和0.949 6。实验结果表明本文算法相对于DoubleUNet算法具有更好的分割效果,可以有效的辅助医师切除结直肠异常组织从而降低息肉癌变的概率,且能够应用于其它医学图像分割任务中。"}]}]},{"lang":"en","data":[{"name":"p","data":[{"name":"text","data":"Colorectal polyps are different in size, color and texture, and the boundaries between the polyps and the surrounding mucosa are not clear, leading to significant challenges in polyp segmentation. In order to improve the segmentation accuracy of colorectal polyps, this paper proposes an improved DoubleUNet network segmentation algorithm. The algorithm first de-reflects the polyp image, and the training dataset is amplified by data-augmentation method; then introduces an attention mechanism in the decoder part of the DoubleUNet network, and replaces the atrous spatial pyramid pooling module of the network with a densely connected atrous spatial pyramid pooling module to improve the ability of the network to extract features; finally, in order to improve the segmentation accuracy of small targets, the Focal Tversky Loss function is proposed as the loss function of this algorithm. The accuracies of the algorithm in the Kvasir-SEG, CVC-ClinicDB, ETIS-Larib, ISIC, and DSB dataset are 0.953 0, 0.964 2, 0.815 7, 0.950 3, 0.964 1, respectively, while the accuracies of the DoubleUNet algorithm in the above datasets are 0.939 4, 0.959 2, 0.800 7, 0.945 9, 0.949 6. The experimental results show that the algorithm in this paper has a better segmentation effect than the DoubleUNet algorithm, which can effectively assist physicians to remove abnormal tissues of colorectum and thus reduce the probability of cancerous polyps and it can be applied to other medical image segmentation tasks as well."}]}]}],"keyword":[{"lang":"zh","data":[[{"name":"text","data":"图像分割"}],[{"name":"text","data":"结直肠息肉"}],[{"name":"text","data":"空洞卷积"}],[{"name":"text","data":"注意力机制"}],[{"name":"text","data":"条件随机场"}]]},{"lang":"en","data":[[{"name":"text","data":"image segmentation"}],[{"name":"text","data":"colorectal polyps"}],[{"name":"text","data":"dilated convolution"}],[{"name":"text","data":"attention mechanism"}],[{"name":"text","data":"conditional random field"}]]}],"highlights":[],"body":[{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"1 引 言"}],"level":"1","id":"s1"}},{"name":"p","data":[{"name":"text","data":"结直肠癌(Colorectal Cancer, CRC)发病率多年来位居癌症发病率的世界第三"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"1","type":"bibr","rid":"R1","data":[{"name":"text","data":"1"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。因此,如何预防结直肠癌已成为世界范围内的公共卫生问题。有研究指出,95%的结直肠癌是由结直肠息肉病变引起的,及时发现并切除结直肠息肉可大大降低结直肠癌的发病率,当前预防结直肠癌最有效的方式就是定期进行结肠镜检查并及时进行息肉切除手术。随着无痛结肠镜的出现和普及,人们对这项检查的接受度越来越高。然而,过去息肉的检测都是通过内窥镜医生人工观察判断的,很大程度上依赖于医生的经验和能力并且需要大量时间和精力,且许多肠道息肉在结肠镜检查时因医生长时间工作时视觉疲劳导致误诊或漏诊。计算机辅助检测系统可以实时地在结肠镜视频中显示息肉的位置,辅助内窥镜医师进行判断"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"2","type":"bibr","rid":"R2","data":[{"name":"text","data":"2"}]}},{"name":"text","data":"]"}]},{"name":"text","data":",从而可以减少息肉被漏诊或误诊的概率。"}]},{"name":"p","data":[{"name":"text","data":"传统的分割方法是通过提取颜色、形状和纹理等特征,然后使用分类器将息肉与其周围非息肉区域进行区分。2014年,Mamonov等人"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"3","type":"bibr","rid":"R3","data":[{"name":"text","data":"3"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"提出依据从结肠镜视频帧中提取到的形状和纹理特征,使用二值分类器将视频帧标记为包含或不包含息肉,并假定息肉的特征是突出且大部分为圆形的,再选择合适的球半径作为分类器的决策参数。2015年,Tajbakhsh等人"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"4","type":"bibr","rid":"R4","data":[{"name":"text","data":"4"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"提出一种利用上下文信息来移除非息肉结构信息从而定位息肉的方法,该方法首先采用Canny边缘检测算法获得粗糙的边缘特征,再通过特殊的特征提取和边缘分类方法去除其中不是息肉的边缘,然后定位息肉。但是息肉的形状、大小、颜色和纹理各异,所以使用传统方法仍然有很高的漏检率,难以准确地将息肉分割出来。"}]},{"name":"p","data":[{"name":"text","data":"近年来,利用深度学习进行医学图像分割"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"5","type":"bibr","rid":"R5","data":[{"name":"text","data":"5"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"和语义分割任务"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"blockXref","data":{"data":[{"name":"xref","data":{"text":"6","type":"bibr","rid":"R6","data":[{"name":"text","data":"6"}]}},{"name":"text","data":"-"},{"name":"xref","data":{"text":"7","type":"bibr","rid":"R7","data":[{"name":"text","data":"7"}]}}],"rid":["R6","R7"],"text":"6-7","type":"bibr"}},{"name":"text","data":"]"}]},{"name":"text","data":"取得很大进展,其中基于深度学习的结直肠息肉分割方法也屡见不鲜"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"blockXref","data":{"data":[{"name":"xref","data":{"text":"8","type":"bibr","rid":"R8","data":[{"name":"text","data":"8"}]}},{"name":"text","data":"-"},{"name":"xref","data":{"text":"9","type":"bibr","rid":"R9","data":[{"name":"text","data":"9"}]}}],"rid":["R8","R9"],"text":"8-9","type":"bibr"}},{"name":"text","data":"]"}]},{"name":"text","data":"。虽然这些方法已经取得了优于传统方法的效果,但它们大多使用边界框来检测息肉,因此不能准确定位息肉的边界。为了解决这个问题,Brandao等人"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"10","type":"bibr","rid":"R10","data":[{"name":"text","data":"10"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"使用带有预训练模型的全卷积网络"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"11","type":"bibr","rid":"R11","data":[{"name":"text","data":"11"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"来检测和分割息肉。Ronneberger等人"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"12","type":"bibr","rid":"R12","data":[{"name":"text","data":"12"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"提出一种完全对称的编码器-解码器结构的UNet网络,受UNet网络成功应用于生物医学图像分割的启发,越来越多人使用UNet的变体结构来进行息肉分割。Zhang等人"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"13","type":"bibr","rid":"R13","data":[{"name":"text","data":"13"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"提出了深度残差结构的U型网络ResUNet,将残差连接引入UNet的每一个卷积模块,可以提取到更深层次的图像特征,从而输出更精准的分割结果。Zhou等人"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"14","type":"bibr","rid":"R14","data":[{"name":"text","data":"14"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"提出了UNet++,通过减少未知的网络深度,重新设计跳跃连接,并且设计了一个对网络进行剪枝的方案来提高UNet++的性能。Fan等人"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"15","type":"bibr","rid":"R15","data":[{"name":"text","data":"15"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"提出了一种平行反向注意力网络PraNet用于息肉的精确分割。Jha等人"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"16","type":"bibr","rid":"R16","data":[{"name":"text","data":"16"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"提出了DoubleUNet网络,通过将两个变体UNet结构级联组成双网结构,使整个网络具有更强的特征提取能力,更大的接收域,并将squeeze-and-excite ( SE )、空间卷积池化金字塔(Atrous Spatial Pyramid Pooling, ASPP)等附加模块插入到网络中以产生边界更加清晰的息肉分割结果。"}]},{"name":"p","data":[{"name":"text","data":"与传统方法比较,利用深度学习进行息肉分割的效果有了大幅提升,但是针对实际应用场景还存在一些问题:医疗图像的获取相对比较困难,训练时的数据量偏小导致训练得到的模型过拟合,分割效果欠佳;在通过结肠镜检查拍摄息肉图片时,息肉周围的粘膜及肠道粘液会形成反光从而影响最后的分割结果"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"17","type":"bibr","rid":"R17","data":[{"name":"text","data":"17"}]}},{"name":"text","data":"]"}]},{"name":"text","data":";息肉图像背景复杂,息肉很容易受其它正常区域影响,导致因编码器提取特征能力不够,无法提取出有效的特征;部分息肉图像类别不均衡,息肉区域像素在图像中比例较小,网络训练较为困难导致出现漏检的情况。针对以上问题,本文提出了一种改进的DoubleUNet网络的结直肠息肉分割算法,主要包括以下几点工作:"}]},{"name":"p","data":[{"name":"text","data":"(1)在数据预处理和后处理阶段,首先对结直肠息肉图像进行去反光处理,消除图像反光区域对分割结果产生的影响;并通过数据扩增方法将训练数据集进行扩大,以解决本算法中训练图片数据量小的问题;最后采用条件随机场方法和测试时数据扩增的推理方式精细化最后的分割结果。"}]},{"name":"p","data":[{"name":"text","data":"(2)通过在DoubleUNet网络两个子网的解码器部分引入注意力模块,使网络在提取特征时更集中关注于息肉区域,并将低层次的信息与高层次的信息进行有效融合;将网络中的ASPP模块替换为DenseASPP模块并移除扩张率为24的空洞卷积层,以提高网络提取图像特征的能力。"}]},{"name":"p","data":[{"name":"text","data":"(3)针对本文分割的目标在图像中比例较小的问题,提出利用Focal Tversky Loss作为算法的损失函数以降低简单样本的权重,提高小目标样本的分割精度。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"2 基于DoubleUNet网络的结直肠息肉分割算法"}],"level":"1","id":"s2"}},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"2.1 DoubleUNet网络"}],"level":"2","id":"s2a"}},{"name":"p","data":[{"name":"text","data":"医学图像分割是对医学图像中感兴趣的目标部分进行像素级分类,大量基于深度学习的图像分割算法已被证明其有效性,其中基于编码器-解码器的方法,例如Unet及其变体是解决当前医疗图像分割问题的流行策略。DoubleUNet是近期提出的用于医疗图像分割任务的网络,也属于Unet网络的一种变体,它是一个双网结构,由两个U型结构相互叠加组合而成,具有两个编码器和两个解码器。如"},{"name":"xref","data":{"text":"图1","type":"fig","rid":"F1","data":[{"name":"text","data":"图1"}]}},{"name":"text","data":"所示,DoubleUNet的第一个子网以带预训练的VGG19作为编码器提取图像的特征,然后通过ASPP模块中不同扩张率的并行空洞卷积捕获该特征空间信息,最后通过解码器得到第一个子网的输出。然后将输入图像与第一个子网产生的掩模(Output1)相乘,作为第二个子网的输入产生另一个掩模(Output2),第二个子网和UNet的区别仅仅在于使用了ASPP和SE模块,所有其它组成部分保持不变,其中SE模块"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"18","type":"bibr","rid":"R18","data":[{"name":"text","data":"18"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"添加在第一个子网的编码器和第二个子网的编码器和解码器的卷积操作之后。"}]},{"name":"fig","data":{"id":"F1","caption":[{"lang":"zh","label":[{"name":"text","data":"图1"}],"title":[{"name":"text","data":"DoubleUNet算法网络结构"}]},{"lang":"en","label":[{"name":"text","data":"Fig.1"}],"title":[{"name":"text","data":"Network architecture of DoubleUNet algorithm"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050362&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050367&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050374&type=","width":"75.01467133","height":"61.76433182","fontsize":""}]}}]},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"2.2 本文算法的网络结构"}],"level":"2","id":"s2b"}},{"name":"p","data":[{"name":"text","data":"本文提出了一种改进的DoubleUNet网络的结直肠息肉分割算法,该算法的网络结构如"},{"name":"xref","data":{"text":"图2","type":"fig","rid":"F2","data":[{"name":"text","data":"图2"}]}},{"name":"text","data":"所示,算法在网络结构中引入DenseASPP模块,以密集连接的方式连接一组不同扩张率的空洞卷积,从而获得了更大范围的感受野,上述工作在没有显著增加模型大小的情况下提高了网络提取特征的能力。在子网1和子网2的解码阶段引入注意力机制,并且结合UNet结构的跳跃连接,将肠道息肉图像的浅层特征与深层特征进行特征融合,降低了噪声带来的影响,使网络在进行特征提取时更关注于病变的息肉区域,提升了结直肠息肉分割精度。"}]},{"name":"fig","data":{"id":"F2","caption":[{"lang":"zh","label":[{"name":"text","data":"图2"}],"title":[{"name":"text","data":"改进的DoubleUNet算法网络结构"}]},{"lang":"en","label":[{"name":"text","data":"Fig.2"}],"title":[{"name":"text","data":"Network architecture of improved DoubleUNet algorithm"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050382&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050389&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050400&type=","width":"75.01467133","height":"79.37500000","fontsize":""}]}},{"name":"p","data":[{"name":"text","data":"本文提出的算法由两个级联的变体UNet结构子网络1和子网络2组成,两个子网络又分别包含编码阶段、DenseASPP和解码阶段。将经过数据预处理后的结直肠息肉数据输入子网1的编码部分(该部分是在ImageNet数据集上预训练的VGG19),该部分的架构和UNet类似;上述步骤提取到深层次结直肠息肉特征之后接入一个DenseASPP模块,该模块以密集连接一组扩张率分别为3、6、12、18的空洞卷积,从而获取多尺度信息并进行融合,在增大感受野的同时不损失信息,使网络能够提取更多目标和更小的息肉特征。解码部分由注意力机制、上采样层、卷积层和SE模块组成,其结构如"},{"name":"xref","data":{"text":"图3","type":"fig","rid":"F3","data":[{"name":"text","data":"图3"}]}},{"name":"text","data":"所示。"}]},{"name":"fig","data":{"id":"F3","caption":[{"lang":"zh","label":[{"name":"text","data":"图3"}],"title":[{"name":"text","data":"解码器结构"}]},{"lang":"en","label":[{"name":"text","data":"Fig.3"}],"title":[{"name":"text","data":"Architecture of the decoder"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050406&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050411&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050391&type=","width":"75.01467133","height":"49.91099930","fontsize":""}]}},{"name":"p","data":[{"name":"text","data":"为了恢复在编码阶段丢失的特征并使网络特征提取时更关注于息肉区域,本文在上采样操作前引入注意力机制,上采样层是一个2×2双线性上采样模块,能够使输入特征映射的维数加倍,再通过跳跃连接与编码部分的特征进行连接,不仅保持了空间分辨率而且还提高了输出特征映射的质量,连接后再进行2次3×3的卷积操作,每次卷积后都进行批处理归一化且连接ReLU激活函数,之后使用SE模块显式建模通道之间的关系以增强重要特征,最后应用一个具有sigmoid激活函数的1×1卷积层生成相应的掩模。子网1的输出与原始输入逐元素相乘作为子网2的输入,子网2与子网1的区别仅仅在于编码器部分,经过级联的子网2后输出最终结直肠息肉的分割结果。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"2.3 注意力机制"}],"level":"2","id":"s2c"}},{"name":"p","data":[{"name":"text","data":"注意力机制"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"19","type":"bibr","rid":"R19","data":[{"name":"text","data":"19"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"最早在自然语言处理领域中广泛应用,随后在计算机视觉领域中进一步发展,当前在图像分类和语义分割任务中应用较多,在语义分割任务中可以对图像进行像素级预测"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"20","type":"bibr","rid":"R20","data":[{"name":"text","data":"20"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。注意力机制决定了在神经网络中哪些部分需要更多的关注则分配更大的权重,降低将每个结直肠息肉图像中的信息编码为一个固定维数向量所需要的计算成本,其结构简单,可以应用于任何大小的输入,并能提高网络特征提取性能。计算机视觉中的注意力机制包括空间注意力和通道注意力,SE模块能够显示各建模通道之间的关系,增强重要特征,抑制无用的特征。在基础网络中SE模块已经应用于第一个子网的编码器和第二个子网的编码器与解码器。为使网络在提取特征时更集中关注于息肉区域,并将低层次的信息与高层次的信息进行有效融合,减少在编码阶段连续下采样所导致的息肉信息丢失,同时抑制上采样带来的噪声影响,提升最后肠道息肉分割的准确率,本文在两个子网的解码器部分都加入了注意力模块"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"21","type":"bibr","rid":"R21","data":[{"name":"text","data":"21"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。"}]},{"name":"p","data":[{"name":"text","data":"本文提出的注意力机制内部结构如"},{"name":"xref","data":{"text":"图4","type":"fig","rid":"F4","data":[{"name":"text","data":"图4"}]}},{"name":"text","data":"所示,图中"},{"name":"italic","data":[{"name":"text","data":"g"}]},{"name":"text","data":"表示与解码器对应的同级编码器输出的低层次信息,"},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":"表示上一层解码器的输出信息。"},{"name":"italic","data":[{"name":"text","data":"g"}]},{"name":"text","data":"信号首先进行批处理归一化且连接ReLU激活函数,然后经过一个3×3的卷积操作,之后连接一个大小为2×2,步长为2的最大池化层并输出"},{"name":"italic","data":[{"name":"text","data":"g"}]},{"name":"sub","data":[{"name":"text","data":"pool"}]},{"name":"text","data":"信号,以上操作的目的是将结直肠息肉的特征图缩小一半的尺寸,从而匹配"},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":"信号经过相同卷积操作得到的特征图大小。之后"},{"name":"italic","data":[{"name":"text","data":"g"}]},{"name":"sub","data":[{"name":"text","data":"pool"}]},{"name":"text","data":"再与"},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"sub","data":[{"name":"text","data":"conv"}]},{"name":"text","data":"逐元素相加,输出的"},{"name":"italic","data":[{"name":"text","data":"gx"}]},{"name":"sub","data":[{"name":"text","data":"sum"}]},{"name":"text","data":"信号融合了浅层和深层特征。最后,"},{"name":"italic","data":[{"name":"text","data":"gx"}]},{"name":"sub","data":[{"name":"text","data":"sum"}]},{"name":"text","data":"信号再通过一个与之前相同的3×3的卷积层,得到的特征图与初始"},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":"信号逐元素相乘得到特征图"},{"name":"italic","data":[{"name":"text","data":"f"}]},{"name":"text","data":"。"}]},{"name":"fig","data":{"id":"F4","caption":[{"lang":"zh","label":[{"name":"text","data":"图4"}],"title":[{"name":"text","data":"注意力机制内部结构"}]},{"lang":"en","label":[{"name":"text","data":"Fig.4"}],"title":[{"name":"text","data":"Internal architecture of attention mechanism"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050412&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050417&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050414&type=","width":"139.99633789","height":"48.93733215","fontsize":""}]}}]},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"2.4 DenseASPP模块"}],"level":"2","id":"s2d"}},{"name":"p","data":[{"name":"text","data":"谷歌团队在DeepLab"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"22","type":"bibr","rid":"R22","data":[{"name":"text","data":"22"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"系列工作中结合多尺度信息和扩张卷积的特点提出了ASPP模块,该模块将不同扩张率的空洞卷积特征结合到一起。在DoubleUNet网络中引入ASPP结构连接两个子网络的编码器和解码器来获取多尺度的卷积特征,但是息肉图片背景复杂,ASPP模块在尺度轴上特征分辨率还不够密集,获取的感受野还不够大,因此本文引入DenseASPP模块"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"23","type":"bibr","rid":"R23","data":[{"name":"text","data":"23"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"来代替ASPP模块。该模块的网络结构如"},{"name":"xref","data":{"text":"图5","type":"fig","rid":"F5","data":[{"name":"text","data":"图5"}]}},{"name":"text","data":"所示,其能够以更密集的方式连接一组空洞卷积,获得更大范围的扩张率,在没有显著增加模型大小的情况下提高了网络提取特征的能力。由于使用扩张率过大的空洞卷积会导致卷积退化,造成特征提取性能的降低,因此本文移除了扩张率为24的空洞卷积层。"}]},{"name":"fig","data":{"id":"F5","caption":[{"lang":"zh","label":[{"name":"text","data":"图5"}],"title":[{"name":"text","data":"DenseASPP网络结构"}]},{"lang":"en","label":[{"name":"text","data":"Fig.5"}],"title":[{"name":"text","data":"Network architecture of DenseASPP"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050433&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050424&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050435&type=","width":"139.99633789","height":"73.61766815","fontsize":""}]}}]},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"2.5 损失函数"}],"level":"2","id":"s2e"}},{"name":"p","data":[{"name":"text","data":"在研究基于深度学习的图像分割问题时,常采用交叉熵损失函数来刻画该类问题,但在医学领域中,检测和分割目标通常只占据整个图像中的很小一部分病变区域,这种不平衡的数据可能导致训练效果不佳,而Focal Tversky Loss函数"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"24","type":"bibr","rid":"R24","data":[{"name":"text","data":"24"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"在小目标的检测中效果较好,因此本文采用Focal Tversky Loss代替交叉熵损失函数,该函数表达式如"},{"name":"xref","data":{"text":"式(1)","type":"disp-formula","rid":"DF1","data":[{"name":"text","data":"式(1)"}]}},{"name":"text","data":":"}]},{"name":"dispformula","data":{"label":[{"name":"text","data":"(1)"}],"data":[{"name":"math","data":{"math":"FTL=c(1-TIc)γ","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050441&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050427&type=","width":"32.76599884","height":"7.36600018","fontsize":""}}},{"name":"text","data":","}],"id":"DF1"}},{"name":"p","data":[{"name":"text","data":"其中:"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"TIc","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050449&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050464&type=","width":"4.74133301","height":"3.72533321","fontsize":""}}}]},{"name":"text","data":"表示tversky指数,"},{"name":"italic","data":[{"name":"text","data":"γ"}]},{"name":"text","data":"的取值范围是[1,3]。"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"TIc","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050470&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050453&type=","width":"4.74133301","height":"3.72533321","fontsize":""}}}]},{"name":"text","data":"的计算公式如"},{"name":"xref","data":{"text":"式(2)","type":"disp-formula","rid":"DF2","data":[{"name":"text","data":"式(2)"}]}},{"name":"text","data":":"}]},{"name":"dispformula","data":{"label":[{"name":"text","data":"(2)"}],"data":[{"name":"math","data":{"math":"TIc=i=1Npicgic+εi=1Npicgic+αi=1Npic_gic_+βi=1Npicgic_+ε","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050477&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050474&type=","width":"59.52066422","height":"23.19866753","fontsize":""}}},{"name":"text","data":","}],"id":"DF2"}},{"name":"p","data":[{"name":"text","data":"其中:"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"pic","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050485&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050480&type=","width":"3.38666677","height":"4.57200003","fontsize":""}}}]},{"name":"text","data":"为像素"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"i","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050500&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050521&type=","width":"1.10066664","height":"2.79399991","fontsize":""}}}]},{"name":"text","data":"属于病变类别"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"c","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050511&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050494&type=","width":"1.52400005","height":"2.79399991","fontsize":""}}}]},{"name":"text","data":"的概率,"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"pic_","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050498&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050515&type=","width":"3.97933316","height":"4.23333359","fontsize":""}}}]},{"name":"text","data":"为像素"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"i","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050500&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050521&type=","width":"1.10066664","height":"2.79399991","fontsize":""}}}]},{"name":"text","data":"属于非病变类别"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"c_","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050536&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050533&type=","width":"1.86266661","height":"3.97933316","fontsize":""}}}]},{"name":"text","data":"的概率,"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"gic","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050541&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050538&type=","width":"3.47133350","height":"4.57200003","fontsize":""}}}]},{"name":"text","data":"为标签中像素"},{"name":"italic","data":[{"name":"text","data":"i"}]},{"name":"text","data":"对应的值,"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"gic_","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050563&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050544&type=","width":"4.06400013","height":"4.23333359","fontsize":""}}}]},{"name":"text","data":"为标签中像素"},{"name":"italic","data":[{"name":"text","data":"i"}]},{"name":"text","data":"对应1-"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"gic","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050568&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050566&type=","width":"3.47133350","height":"4.57200003","fontsize":""}}}]},{"name":"text","data":"的值,在二分类任务中"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"gic","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050576&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050573&type=","width":"3.47133350","height":"4.57200003","fontsize":""}}}]},{"name":"text","data":"只有0和1两种取值,0表示像素"},{"name":"italic","data":[{"name":"text","data":"i"}]},{"name":"text","data":"属于非病变类别"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"c¯","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050583&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050559&type=","width":"1.52400005","height":"3.47133350","fontsize":""}}}]},{"name":"text","data":",1表示像素"},{"name":"italic","data":[{"name":"text","data":"i"}]},{"name":"text","data":"属于病变类别"},{"name":"italic","data":[{"name":"text","data":"c"}]},{"name":"text","data":","},{"name":"inlineformula","data":[{"name":"math","data":{"math":"ε","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050597&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050594&type=","width":"1.43933344","height":"2.79399991","fontsize":""}}}]},{"name":"text","data":"为光滑因子。通过调节超参数"},{"name":"italic","data":[{"name":"text","data":"α"}]},{"name":"text","data":"和"},{"name":"italic","data":[{"name":"text","data":"β"}]},{"name":"text","data":",可以在类别不均衡的情况下改变权重以提高召回率,本文分别设置"},{"name":"italic","data":[{"name":"text","data":"α"}]},{"name":"text","data":"=0.7,"},{"name":"italic","data":[{"name":"text","data":"β"}]},{"name":"text","data":"=0.3,"},{"name":"italic","data":[{"name":"text","data":"γ"}]},{"name":"text","data":"=1.33。"}]}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"3 数据预处理及后处理"}],"level":"1","id":"s3"}},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"3.1 数据集"}],"level":"2","id":"s3a"}},{"name":"p","data":[{"name":"text","data":"为验证算法的有效性、泛化性和普适性,本文算法在五个公共数据集上进行了相关实验,第一个数据集是Kvasir-SEG数据集"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"25","type":"bibr","rid":"R25","data":[{"name":"text","data":"25"}]}},{"name":"text","data":"]"}]},{"name":"text","data":",由挪威奥斯陆大学医院的内窥镜专家采集并标注,该数据集包含1 000张息肉图片和其对应的标签,图片像素大小为256×256;第二个数据集是CVC-ClinicDB数据集"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"26","type":"bibr","rid":"R26","data":[{"name":"text","data":"26"}]}},{"name":"text","data":"]"}]},{"name":"text","data":",由医学图像计算与计算机辅助干预国际会议(MedicalImage Computing and Computer- Assisted Intervention,MICCAI)于2015年发布,该数据集包含31个结肠镜序列的612张图片和其对应的标签,图片像素大小为384×288;第三个数据集是ETIS-Larib数据集"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"27","type":"bibr","rid":"R27","data":[{"name":"text","data":"27"}]}},{"name":"text","data":"]"}]},{"name":"text","data":",由MICCAI息肉自动检测子挑战赛"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"28","type":"bibr","rid":"R28","data":[{"name":"text","data":"28"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"在2017年发布,该数据集包含196张从结肠镜视频中提取的息肉图片和其对应的标签,图片像素大小为1 255×966;第四个数据集ISIC数据集"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"29","type":"bibr","rid":"R29","data":[{"name":"text","data":"29"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"是由国际皮肤成像协作组织(International Skin Imaging Collaboration,ISIC)提供的皮肤镜图像数据集,该数据集包括2 594张图片,图片有多种不同尺寸;第五个数据集是DSB数据集,由数据科学碗(Data Science Bowl)挑战赛在2018年发布,该数据集包含670张细胞核图片及其对应的标签,图片有多种不同尺寸。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"3.2 数据预处理"}],"level":"2","id":"s3b"}},{"name":"p","data":[{"name":"text","data":"在进行结肠镜检查拍摄息肉图片的过程中,由于光源被反射,息肉周围的粘膜以及肠道的黏液处在图片中会显示出高光,这些图像特征会对感知图像质量产生负面影响"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"30","type":"bibr","rid":"R30","data":[{"name":"text","data":"30"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。此外,对于进行图像分割任务的算法来说,息肉表面的高光会影响从息肉表面获得的纹理特征,严重干扰算法的有效性。为降低图片高光区域对算法的影响,本文在数据预处理阶段对原息肉图像进行去反光处理。首先对图片进行反光检测:第一步使用颜色平衡自适应阈值来确定图片中很明显的高光区域,若某部分显示强度过强,则属于高光。然而颜色通道可能因颜色平衡而产生强度偏移,同时高光的实际强度可能高于所有三个颜色通道的饱和点,因此将图像的绿色和蓝色通道"},{"name":"italic","data":[{"name":"text","data":"c"}]},{"name":"sub","data":[{"name":"text","data":"G"}]},{"name":"text","data":"和"},{"name":"italic","data":[{"name":"text","data":"c"}]},{"name":"sub","data":[{"name":"text","data":"B"}]},{"name":"text","data":"进行归一化,计算灰度强度"},{"name":"italic","data":[{"name":"text","data":"c"}]},{"name":"sub","data":[{"name":"text","data":"E"}]},{"name":"text","data":",具体为"},{"name":"italic","data":[{"name":"text","data":"c"}]},{"name":"sub","data":[{"name":"text","data":"E"}]},{"name":"text","data":"=0.2989·"},{"name":"italic","data":[{"name":"text","data":"c"}]},{"name":"sub","data":[{"name":"text","data":"R"}]},{"name":"text","data":"+0.5870·"},{"name":"italic","data":[{"name":"text","data":"c"}]},{"name":"sub","data":[{"name":"text","data":"G"}]},{"name":"text","data":"+0.1140·"},{"name":"italic","data":[{"name":"text","data":"c"}]},{"name":"sub","data":[{"name":"text","data":"B"}]},{"name":"text","data":",其中"},{"name":"italic","data":[{"name":"text","data":"c"}]},{"name":"sub","data":[{"name":"text","data":"R"}]},{"name":"text","data":"为红色通道。之所以使用灰度强度作为参考,而不是主要的红色通道,是因为在结肠镜图像中,接近饱和的红色通道强度不仅出现在反光区域,图像大部分区域都显示为较强的红色。按如下方法计算颜色平衡比:"}]},{"name":"dispformula","data":{"label":[{"name":"text","data":"(3)"}],"data":[{"name":"math","data":{"math":"rGE=P95(cG)P95(cE),rBE=P95(cB)P95(cE),","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050608&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050599&type=","width":"23.87599945","height":"20.65866661","fontsize":""}}},{"name":"text","data":" "}],"id":"DF3"}},{"name":"p","data":[{"name":"text","data":"其中,"},{"name":"italic","data":[{"name":"text","data":"P"}]},{"name":"sub","data":[{"name":"text","data":"95"}]},{"name":"text","data":"(·)表示颜色强度值超过95%的数值。当息肉图像中像素"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"x0","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050606&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050603&type=","width":"3.21733332","height":"3.72533321","fontsize":""}}}]},{"name":"text","data":"满足以下条件时,则被标记为高光:"}]},{"name":"dispformula","data":{"label":[{"name":"text","data":"(4)"}],"data":[{"name":"math","data":{"math":"cG(x0)>rGET1cB(x0)>rBET1cE(x0)>T1","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050620&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050617&type=","width":"71.45866394","height":"4.48733330","fontsize":""}}},{"name":"text","data":","}],"id":"DF4"}},{"name":"p","data":[{"name":"text","data":"其中,"},{"name":"italic","data":[{"name":"text","data":"T"}]},{"name":"sub","data":[{"name":"text","data":"1"}]},{"name":"italic","data":[{"name":"text","data":"="}]},{"name":"text","data":"240,表示灰度阈值。第二步对这些高光区域用其周围一圈半径为2和4像素的圆形区域像素平均值进行填充,得到填充图像;第三步对填充后的图像做中值滤波(中值滤波器窗口大小"},{"name":"italic","data":[{"name":"text","data":"w"}]},{"name":"text","data":"=30),得到“平滑非反光区域颜色像素”,中值滤波后的图像称为“平滑非反光区域颜色图像”,将图片中的每个像素与之进行比较,得到反光区域。确定图片中被判定为反光的区域后,立即对其进行反光修复,即用反光检测中的填充方法将反光区域进行填充,然后对填充图片进行高斯模糊(高斯核"},{"name":"italic","data":[{"name":"text","data":"σ"}]},{"name":"text","data":"=8)得到一副非反光强平滑的图像,最后结合原图和高斯模糊的图像进行修复得到去反光图像,如"},{"name":"xref","data":{"text":"图6","type":"fig","rid":"F6","data":[{"name":"text","data":"图6"}]}},{"name":"text","data":"所示去反光前后图像。由于医学数据集的获取和标注比较困难,现有数据集包含的样本数较少,这使得在该数据集上训练出来的模型容易过拟合且效果欠佳。针对训练样本较少的问题,本文提出利用数据扩增的方法来增加样本数量。首先分别将Kvasir-SEG、CVC-ClinicDB、ISIC和DSB四个数据集进行训练集、验证集、测试集的划分,在数据集中的所有数据中随机选取80%的数据作为训练集,10%的数据作为验证集,10%的数据作为测试集,ETIS-Larib数据集仅作为测试集不参与数据集划分;接着对训练集数据采用数据扩增的方法增加样本数量,包括中心裁剪、随机旋转、高斯模糊、弹性变换和RGB平移等,单张图片可以扩增成26张不同的图片,Kvasir-SEG、CVC-ClinicDB、ISIC和DSB四个数据集的训练集扩增后样本数依次为20 800、12 740、53 976和13 936张图片;最后对数据集中的所有图片,包括扩增图片进行尺寸调整,将ETIS-Larib数据集中的图片调整为384×288,ISIC和DSB数据集中的图片调整为256×256,Kvasir-SEG和CVC-ClinicDB数据集中的图片保持不变。"}]},{"name":"fig","data":{"id":"F6","caption":[{"lang":"zh","label":[{"name":"text","data":"图6"}],"title":[{"name":"text","data":"去反光前后图像"}]},{"lang":"en","label":[{"name":"text","data":"Fig.6"}],"title":[{"name":"text","data":"Image before and after de-reflective"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050628&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050635&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050632&type=","width":"75.01467133","height":"36.74533463","fontsize":""}]}}]},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"3.3 后处理"}],"level":"2","id":"s3c"}},{"name":"p","data":[{"name":"text","data":"经过分割模型预测得到的分割息肉图片经常会包含一些噪声,比如边缘不够光滑和病变息肉区域不连通等问题。本文采取了两种后处理方法:一是条件随机场(CRF)模型方法"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"31","type":"bibr","rid":"R31","data":[{"name":"text","data":"31"}]}},{"name":"text","data":"]"}]},{"name":"text","data":",条件随机场为基于概率的无向图模型,常用于像素级的图像分割中,具有相似位置和颜色特征的两个像素,其大概率被赋予相同类别标签,其被分割的可能性小,对应了条件随机场中的概率模型。目标图像的像素为图的顶点,顶点作为状态特征,将图中顶点进行连接作为边,边作为转移特征,求解像素标签时考虑图像中其余像素对该像素的影响,精细化分割和标记,使分割结果在边缘处更加准确平滑"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"32","type":"bibr","rid":"R32","data":[{"name":"text","data":"32"}]}},{"name":"text","data":"]"}]},{"name":"text","data":";二是测试时数据扩增的推理预测方法,该方法在进行推理预测时首先将输入图片进行水平翻转和垂直翻转的数据扩增,再将三张图片一起进行预测,得到的中间结果进行翻转逆处理后输出最后的预测结果,最后取三个预测结果的平均值作为分割的最终结果。"}]},{"name":"p","data":[{"name":"text","data":"在"},{"name":"xref","data":{"text":"表1","type":"table","rid":"T1","data":[{"name":"text","data":"表1"}]}},{"name":"text","data":"和"},{"name":"xref","data":{"text":"表2","type":"table","rid":"T2","data":[{"name":"text","data":"表2"}]}},{"name":"text","data":"中,方法a表示完成图像预处理后的改进DoubleUNet网络方法;方法b表示同时采用方法a与条件随机场方法;方法c表示同时采用方法a与测试时数据扩增推理方法;方法d表示同时采用方法a与两种后处理方法。"},{"name":"xref","data":{"text":"表1","type":"table","rid":"T1","data":[{"name":"text","data":"表1"}]}},{"name":"text","data":"展示了在Kvasir-SEG数据集上各种方法的实验结果,可以看出采用条件随机场模型方法和测试时数据扩增的推理预测方法都能够进一步提高分割精度。"},{"name":"xref","data":{"text":"表2","type":"table","rid":"T2","data":[{"name":"text","data":"表2"}]}},{"name":"text","data":"展示了在DSB数据集上各方法的实验结果,可以看出采用条件随机场模型方法会降低在该数据集上的分割精度,这是由于该数据集的图片样本分割目标较多且分割目标间有堆叠情况,加入条件随机场方法进行后处理会将多个堆叠的分割目标连通为一个整体,从而影响分割精度;而测试时数据扩增的推理预测方法能够进一步提高分割精度。"}]},{"name":"table","data":{"id":"T1","caption":[{"lang":"zh","label":[{"name":"text","data":"表1"}],"title":[{"name":"text","data":"Kvasir-SEG数据集上各处理方法性能"}]},{"lang":"en","label":[{"name":"text","data":"Tab.1"}],"title":[{"name":"text","data":"Performance of different processing methods on the Kvasir-SEG dataset"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Method"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Dice"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"MIoU"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Recall"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Precision"}]}]],"body":[[{"align":"center","data":[{"name":"text","data":"方法a"}]},{"align":"center","data":[{"name":"text","data":"0.913 3"}]},{"align":"center","data":[{"name":"text","data":"0.843 5"}]},{"align":"center","data":[{"name":"text","data":"0.848 0"}]},{"align":"center","data":[{"name":"text","data":"0.942 5"}]}],[{"align":"center","data":[{"name":"text","data":"方法b"}]},{"align":"center","data":[{"name":"text","data":"0.914 2"}]},{"align":"center","data":[{"name":"text","data":"0.845 9"}]},{"align":"center","data":[{"name":"text","data":"0.848 7"}]},{"align":"center","data":[{"name":"text","data":"0.945 3"}]}],[{"align":"center","data":[{"name":"text","data":"方法c"}]},{"align":"center","data":[{"name":"text","data":"0.919 6"}]},{"align":"center","data":[{"name":"text","data":"0.851 3"}]},{"align":"center","data":[{"name":"text","data":"0.850 8"}]},{"align":"center","data":[{"name":"text","data":"0.951 7"}]}],[{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"方法d"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.919 6"}]}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.853 8"}]}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.853 7"}]}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.953 0"}]}]}]],"foot":[]}],"graphics":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050656&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050645&type=","width":"76.90000153","height":"25.87501526","fontsize":""}}},{"name":"table","data":{"id":"T2","caption":[{"lang":"zh","label":[{"name":"text","data":"表2"}],"title":[{"name":"text","data":"DSB数据集上各处理方法性能"}]},{"lang":"en","label":[{"name":"text","data":"Tab.2"}],"title":[{"name":"text","data":"Performance of different processing methods on the DSB dataset"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Method"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Dice"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"MIoU"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Recall"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Precision"}]}]],"body":[[{"align":"center","data":[{"name":"text","data":"方法a"}]},{"align":"center","data":[{"name":"text","data":"0.917 1"}]},{"align":"center","data":[{"name":"text","data":"0.850 3"}]},{"align":"center","data":[{"name":"text","data":"0.661 7"}]},{"align":"center","data":[{"name":"text","data":"0.962 9"}]}],[{"align":"center","data":[{"name":"text","data":"方法b"}]},{"align":"center","data":[{"name":"text","data":"0.766 2"}]},{"align":"center","data":[{"name":"text","data":"0.807 4"}]},{"align":"center","data":[{"name":"text","data":"0.662 8"}]},{"align":"center","data":[{"name":"text","data":"0.782 5"}]}],[{"align":"center","data":[{"name":"text","data":"方法c"}]},{"align":"center","data":[{"name":"bold","data":[{"name":"text","data":"0.919 6"}]}]},{"align":"center","data":[{"name":"bold","data":[{"name":"text","data":"0.852 4"}]}]},{"align":"center","data":[{"name":"bold","data":[{"name":"text","data":"0.665 5"}]}]},{"align":"center","data":[{"name":"bold","data":[{"name":"text","data":"0.964 1"}]}]}],[{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"方法d"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"0.764 5"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"0.806 8"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"0.664 3"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"0.779 4"}]}]],"foot":[]}],"graphics":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050661&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050658&type=","width":"76.90000916","height":"25.87501526","fontsize":""}}},{"name":"p","data":[{"name":"xref","data":{"text":"图7","type":"fig","rid":"F7","data":[{"name":"text","data":"图7"}]}},{"name":"text","data":"和"},{"name":"xref","data":{"text":"图8","type":"fig","rid":"F8","data":[{"name":"text","data":"图8"}]}},{"name":"text","data":"分别为Kvasir-SEG数据集无后处理分割结果和进行后处理操作,"},{"name":"xref","data":{"text":"图9","type":"fig","rid":"F9","data":[{"name":"text","data":"图9"}]}},{"name":"text","data":"和"},{"name":"xref","data":{"text":"图10","type":"fig","rid":"F10","data":[{"name":"text","data":"图10"}]}},{"name":"text","data":"分别为DSB数据集无后处理分割结果和进行后处理操作。在"},{"name":"xref","data":{"text":"图7","type":"fig","rid":"F7","data":[{"name":"text","data":"图7"}]}},{"name":"text","data":"和"},{"name":"xref","data":{"text":"图9","type":"fig","rid":"F9","data":[{"name":"text","data":"图9"}]}},{"name":"text","data":"中,金标准表示专家标注的分割结果,输出图为无后处理结果。在"},{"name":"xref","data":{"text":"图8","type":"fig","rid":"F8","data":[{"name":"text","data":"图8"}]}},{"name":"text","data":"和"},{"name":"xref","data":{"text":"图10","type":"fig","rid":"F10","data":[{"name":"text","data":"图10"}]}},{"name":"text","data":"中,(a)图表示单独采用条件随机场后处理方法的输出结果,(b)图表示单独采用测试时数据扩增方法的输出结果,(c)图表示同时采用条件随机场和测试时数据扩增方法的输出结果。如"},{"name":"xref","data":{"text":"图8","type":"fig","rid":"F8","data":[{"name":"text","data":"图8"}]}},{"name":"text","data":"所示,加入条件随机场方法进行后处理将原始输出图中一些不连通的区域和边缘部分被错误预测的像素进行修正,精细化了分割结果;而采用测试时数据扩增的推理预测方法将原始图及其经过多种变换后的图共同输入模型进行预测,可以避免因原始图中的某些重要特征被忽略而导致错误分割,提高了算法的鲁棒性和防止过拟合的能力,但可能会略微增加模型推理的时间,降低算法的实时性。如"},{"name":"xref","data":{"text":"图10","type":"fig","rid":"F10","data":[{"name":"text","data":"图10"}]}},{"name":"text","data":"所示,在细胞核这类分割目标较多且分割目标间有堆叠情况的数据集上,加入条件随机场方法进行后处理会将多个堆叠的细胞核连通为一个整体,从而影响分割的准确性;而采用测试时数据扩增的推理预测方法依然能够提高分割精度,因此针对该数据集本文仅采用测试时数据扩增的后处理方法。"}]},{"name":"fig","data":{"id":"F7","caption":[{"lang":"zh","label":[{"name":"text","data":"图7"}],"title":[{"name":"text","data":"Kvasir-SEG无后处理分割结果"}]},{"lang":"en","label":[{"name":"text","data":"Fig.7"}],"title":[{"name":"text","data":"Segmentation results without post-processing on Kvasir-SEG dataset"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050669&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050684&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050666&type=","width":"75.01467133","height":"36.40666580","fontsize":""}]}},{"name":"fig","data":{"id":"F8","caption":[{"lang":"zh","label":[{"name":"text","data":"图8"}],"title":[{"name":"text","data":"Kvasir-SEG数据集上进行后处理操作"}]},{"lang":"en","label":[{"name":"text","data":"Fig.8"}],"title":[{"name":"text","data":"Post-processing on Kvasir-SEG dataset"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050678&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050698&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050680&type=","width":"75.01467133","height":"32.97766495","fontsize":""}]}},{"name":"fig","data":{"id":"F9","caption":[{"lang":"zh","label":[{"name":"text","data":"图9"}],"title":[{"name":"text","data":"DSB数据集无后处理分割结果"}]},{"lang":"en","label":[{"name":"text","data":"Fig.9"}],"title":[{"name":"text","data":"Segmentation results without post-processing on DSB dataset"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050702&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050696&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050703&type=","width":"75.01467133","height":"36.40666580","fontsize":""}]}},{"name":"fig","data":{"id":"F10","caption":[{"lang":"zh","label":[{"name":"text","data":"图10"}],"title":[{"name":"text","data":"DSB数据集上进行后处理操作"}]},{"lang":"en","label":[{"name":"text","data":"Fig.10"}],"title":[{"name":"text","data":"Post-processing on DSB dataset"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050708&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050711&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050710&type=","width":"75.01467133","height":"32.97766495","fontsize":""}]}}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"4 实验结果与分析"}],"level":"1","id":"s4"}},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"4.1 参数设置"}],"level":"2","id":"s4a"}},{"name":"p","data":[{"name":"text","data":"本文算法运行环境的硬件设备参数为:CPU主频为3.6 GHz,显卡为英伟达2080ti,内存为32 GHz;软件环境为:操作系统为Windows 10,深度学习框架为TensorFlow+Keras。为方便与DoubleUNet"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"16","type":"bibr","rid":"R16","data":[{"name":"text","data":"16"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"原始算法模型性能进行对比,本文采用的实验参数基本与DoubleUNet算法相同:所有实验采用的初始学习率均为0.000 01,连续20个批次验证集损失不再上升时把学习率降低到原来的0.1倍,采用Nadam优化器,其中ISIC数据集和DSB数据集使用Adam优化器。开始训练时的批处理大小(batch size)设置为4,训练总轮次为300,当验证集精度连续50轮不再变好,则提前终止训练。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"4.2 评价指标"}],"level":"2","id":"s4b"}},{"name":"p","data":[{"name":"text","data":"本文采用四个评价指标对息肉分割的性能进行评估,分别是Dice系数(Dice)、平均交并比(MIoU)、召回率(Recall)、准确率(Precision)。"}]},{"name":"dispformula","data":{"label":[{"name":"text","data":"(5)"}],"data":[{"name":"math","data":{"math":"Dice=2XYX+Y","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050723&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050728&type=","width":"27.60133362","height":"9.48266602","fontsize":""}}},{"name":"text","data":","}],"id":"DF5"}},{"name":"dispformula","data":{"label":[{"name":"text","data":"(6)"}],"data":[{"name":"math","data":{"math":"MIoU=XYX+Y-XY","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050745&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050725&type=","width":"46.82066727","height":"9.48266602","fontsize":""}}},{"name":"text","data":","}],"id":"DF6"}},{"name":"dispformula","data":{"label":[{"name":"text","data":"(7)"}],"data":[{"name":"math","data":{"math":"Recall=TP(TP+FN)","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050748&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050746&type=","width":"33.78200150","height":"8.63599968","fontsize":""}}},{"name":"text","data":","}],"id":"DF7"}},{"name":"dispformula","data":{"label":[{"name":"text","data":"(8)"}],"data":[{"name":"math","data":{"math":"Precesion=TP(TP+FP)","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050741&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050753&type=","width":"37.67666626","height":"8.63599968","fontsize":""}}},{"name":"text","data":","}],"id":"DF8"}},{"name":"p","data":[{"name":"text","data":"其中:"},{"name":"italic","data":[{"name":"text","data":"X"}]},{"name":"text","data":"为预测得到的分割结果中息肉区域的像素集合,"},{"name":"italic","data":[{"name":"text","data":"Y"}]},{"name":"text","data":"为原息肉图片金标准息肉区域的像素集合,"},{"name":"italic","data":[{"name":"text","data":"TP"}]},{"name":"text","data":"为分割结果中被正确分割的像素数目,"},{"name":"italic","data":[{"name":"text","data":"FP"}]},{"name":"text","data":"为分割结果中被错误分割的背景像素数目,"},{"name":"italic","data":[{"name":"text","data":"FN"}]},{"name":"text","data":"为分割结果中被错误预测为背景的息肉像素数目。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"4.3 算法在结直肠息肉数据集上的实验"}],"level":"2","id":"s4c"}},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"4"},{"name":"italic","data":[{"name":"text","data":"."}]},{"name":"text","data":"3"},{"name":"italic","data":[{"name":"text","data":"."}]},{"name":"text","data":"1 算法中每个改进步骤的作用"}],"level":"3","id":"s4c1"}},{"name":"p","data":[{"name":"text","data":"为了验证算法中每个改进步骤的有效性,本文分别在Kvasir-SEG和CVC-ClinicDB数据集上对每个改进步骤的性能效果进行了测试。分别验证本文算法在DoubleUNet算法基础上完成改进网络结构、替换损失函数、完成图像预处理及后处理时对分割结果的影响。"},{"name":"xref","data":{"text":"表3","type":"table","rid":"T3","data":[{"name":"text","data":"表3"}]}},{"name":"text","data":"为在Kvasir-SEG数据集上的测试结果,"},{"name":"xref","data":{"text":"表4","type":"table","rid":"T4","data":[{"name":"text","data":"表4"}]}},{"name":"text","data":"为在CVC-ClinicDB数据集上的测试结果。在"},{"name":"xref","data":{"text":"表3","type":"table","rid":"T3","data":[{"name":"text","data":"表3"}]}},{"name":"text","data":"和4中,“改进网络”步骤是在“DoubleUNet”算法的基础上完成的相应操作;“损失函数”步骤是在“改进网络”步骤基础上完成的相应操作;“图像预处理”步骤是在“损失函数”步骤基础上完成的相应操作;本文方法即“后处理”步骤,是在“图像预处理”步骤基础上完成的后处理过程。从"},{"name":"xref","data":{"text":"表3","type":"table","rid":"T3","data":[{"name":"text","data":"表3"}]}},{"name":"text","data":"和4中可以发现,在DoubleUNet算法基础上,每个改进步骤对算法的Dice系数,平均交并比,召回率的性能都有一定程度的提升;且算法模型改进(包括“改进网络”和“损失函数”步骤)和“后处理”步骤对准确率的提升明显。"}]},{"name":"table","data":{"id":"T3","caption":[{"lang":"zh","label":[{"name":"text","data":"表3"}],"title":[{"name":"text","data":"Kvasir-SEG数据集上各改进步骤的分割结果"}]},{"lang":"en","label":[{"name":"text","data":"Tab.3"}],"title":[{"name":"text","data":"Segmentation results of each improvd step on the Kvasir-SEG dataset"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Method"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Dice"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"MIoU"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Recall"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Precision"}]}]],"body":[[{"align":"center","data":[{"name":"text","data":"DoubleUNet"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"16","type":"bibr","rid":"R16","data":[{"name":"text","data":"16"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0.889 0"}]},{"align":"center","data":[{"name":"text","data":"0.806 1"}]},{"align":"center","data":[{"name":"text","data":"0.813 7"}]},{"align":"center","data":[{"name":"text","data":"0.939 4"}]}],[{"align":"center","data":[{"name":"text","data":"改进网络"}]},{"align":"center","data":[{"name":"text","data":"0.898 9"}]},{"align":"center","data":[{"name":"text","data":"0.819 8"}]},{"align":"center","data":[{"name":"text","data":"0.833 9"}]},{"align":"center","data":[{"name":"text","data":"0.940 5"}]}],[{"align":"center","data":[{"name":"text","data":"损失函数"}]},{"align":"center","data":[{"name":"text","data":"0.910 0"}]},{"align":"center","data":[{"name":"text","data":"0.838 5"}]},{"align":"center","data":[{"name":"text","data":"0.842 1"}]},{"align":"center","data":[{"name":"text","data":"0.937 7"}]}],[{"align":"center","data":[{"name":"text","data":"图像预处理"}]},{"align":"center","data":[{"name":"text","data":"0.913 3"}]},{"align":"center","data":[{"name":"text","data":"0.843 5"}]},{"align":"center","data":[{"name":"text","data":"0.848 0"}]},{"align":"center","data":[{"name":"text","data":"0.942 5"}]}],[{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"本文方法"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.919 6"}]}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.853 8"}]}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.853 7"}]}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.953 0"}]}]}]],"foot":[]}],"graphics":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050775&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050758&type=","width":"76.90000153","height":"31.04999542","fontsize":""}}},{"name":"table","data":{"id":"T4","caption":[{"lang":"zh","label":[{"name":"text","data":"表4"}],"title":[{"name":"text","data":"CVC-ClinicDB数据集上各改进步骤的分割结果"}]},{"lang":"en","label":[{"name":"text","data":"Tab.4"}],"title":[{"name":"text","data":"Segmentation results of each improved step on the CVC-ClinicDB dataset"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Method"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Dice"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"MIoU"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Recall"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Precision"}]}]],"body":[[{"align":"center","data":[{"name":"text","data":"DoubleUNet"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"16","type":"bibr","rid":"R16","data":[{"name":"text","data":"16"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0.923 9"}]},{"align":"center","data":[{"name":"text","data":"0.861 1"}]},{"align":"center","data":[{"name":"text","data":"0.845 7"}]},{"align":"center","data":[{"name":"text","data":"0.959 2"}]}],[{"align":"center","data":[{"name":"text","data":"改进网络"}]},{"align":"center","data":[{"name":"text","data":"0.943 6"}]},{"align":"center","data":[{"name":"text","data":"0.894 7"}]},{"align":"center","data":[{"name":"text","data":"0.881 7"}]},{"align":"center","data":[{"name":"text","data":"0.961 0"}]}],[{"align":"center","data":[{"name":"text","data":"损失函数"}]},{"align":"center","data":[{"name":"text","data":"0.947 6"}]},{"align":"center","data":[{"name":"text","data":"0.901 7"}]},{"align":"center","data":[{"name":"text","data":"0.892 8"}]},{"align":"center","data":[{"name":"text","data":"0.958 5"}]}],[{"align":"center","data":[{"name":"text","data":"图像预处理"}]},{"align":"center","data":[{"name":"text","data":"0.950 1"}]},{"align":"center","data":[{"name":"text","data":"0.905 6"}]},{"align":"center","data":[{"name":"text","data":"0.897 7"}]},{"align":"center","data":[{"name":"text","data":"0.959 1"}]}],[{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"本文方法"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.954 3"}]}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.913 0"}]}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.899 0"}]}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.964 2"}]}]}]],"foot":[]}],"graphics":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050764&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050777&type=","width":"76.90000916","height":"32.67500305","fontsize":""}}},{"name":"p","data":[{"name":"xref","data":{"text":"图11","type":"fig","rid":"F11","data":[{"name":"text","data":"图11"}]}},{"name":"text","data":"为CVC-ClinicDB数据集和Kvasir-SEG数据集的验证集部分息肉图片的分割结果,其中第1、2行为CVC-ClinicDB数据集的分割结果,第3、4行为Kvasir-SEG数据集的分割结果。从分割结果来看,本文算法能够在背景复杂的息肉图片中提取到重要特征,并且对于图片中的小目标分割结果也十分理想,极少出现漏分割和错误分割的情况。"}]},{"name":"fig","data":{"id":"F11","caption":[{"lang":"zh","label":[{"name":"text","data":"图11"}],"title":[{"name":"text","data":"CVC-ClinicDB和Kvasir-SEG数据集上的分割结果"}]},{"lang":"en","label":[{"name":"text","data":"Fig.11"}],"title":[{"name":"text","data":"Segmentation results on CVC-ClinicDB and Kvasir-SEG datasets"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050781&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050787&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050769&type=","width":"75.01467133","height":"99.56800079","fontsize":""}]}}]},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"4"},{"name":"italic","data":[{"name":"text","data":"."}]},{"name":"text","data":"3"},{"name":"italic","data":[{"name":"text","data":"."}]},{"name":"text","data":"2 不同算法分割性能对比实验"}],"level":"3","id":"s4c2"}},{"name":"p","data":[{"name":"text","data":"为了检验本文算法在Kvasir-SEG、CVC- ClinicDB数据集上的分割性能,本文在两个数据集上分别和五个经典算法进行了对比实验,具体为UNet"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"11","type":"bibr","rid":"R11","data":[{"name":"text","data":"11"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"、ResUNet"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"12","type":"bibr","rid":"R12","data":[{"name":"text","data":"12"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"、ResUNet-Mod"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"12","type":"bibr","rid":"R12","data":[{"name":"text","data":"12"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"、UNet++"},{"name":"sup","data":[{"name":"text","data":"[13]"}]},{"name":"text","data":"、ParNet"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"14","type":"bibr","rid":"R14","data":[{"name":"text","data":"14"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"算法。实验结果分别如"},{"name":"xref","data":{"text":"表5","type":"table","rid":"T5","data":[{"name":"text","data":"表5"}]}},{"name":"text","data":"和"},{"name":"xref","data":{"text":"表6","type":"table","rid":"T6","data":[{"name":"text","data":"表6"}]}},{"name":"text","data":"所示。"}]},{"name":"table","data":{"id":"T5","caption":[{"lang":"zh","label":[{"name":"text","data":"表5"}],"title":[{"name":"text","data":"Kvasir-SEG数据集上不同算法分割结果"}]},{"lang":"en","label":[{"name":"text","data":"Tab.5"}],"title":[{"name":"text","data":"Segmentation results of different algorithms on the Kvasir-SEG dataset"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Method"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Dice"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"MIoU"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Recall"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Precision"}]}]],"body":[[{"align":"center","data":[{"name":"text","data":"UNet"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"12","type":"bibr","rid":"R12","data":[{"name":"text","data":"12"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0.714 7"}]},{"align":"center","data":[{"name":"text","data":"0.433 4"}]},{"align":"center","data":[{"name":"text","data":"0.630 6"}]},{"align":"center","data":[{"name":"text","data":"0.922 2"}]}],[{"align":"center","data":[{"name":"text","data":"ResUNet"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"13","type":"bibr","rid":"R13","data":[{"name":"text","data":"13"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0.514 4"}]},{"align":"center","data":[{"name":"text","data":"0.436 4"}]},{"align":"center","data":[{"name":"text","data":"0.504 1"}]},{"align":"center","data":[{"name":"text","data":"0.729 2"}]}],[{"align":"center","data":[{"name":"text","data":"ResUNet-Mod"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"13","type":"bibr","rid":"R13","data":[{"name":"text","data":"13"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0.790 9"}]},{"align":"center","data":[{"name":"text","data":"0.428 7"}]},{"align":"center","data":[{"name":"text","data":"0.690 9"}]},{"align":"center","data":[{"name":"text","data":"0.871 3"}]}],[{"align":"center","data":[{"name":"text","data":"UNet++"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"14","type":"bibr","rid":"R14","data":[{"name":"text","data":"14"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0.821 0"}]},{"align":"center","data":[{"name":"text","data":"0.743 0"}]},{"align":"center","data":[{"name":"text","data":"-"}]},{"align":"center","data":[{"name":"text","data":"-"}]}],[{"align":"center","data":[{"name":"text","data":"ParNet"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"15","type":"bibr","rid":"R15","data":[{"name":"text","data":"15"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0.898 0"}]},{"align":"center","data":[{"name":"text","data":"0.840 0"}]},{"align":"center","data":[{"name":"text","data":"-"}]},{"align":"center","data":[{"name":"text","data":"-"}]}],[{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"本文方法"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.919 6"}]}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.853 8"}]}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.853 7"}]}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.953 0"}]}]}]],"foot":[]}],"graphics":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050804&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050789&type=","width":"76.90000153","height":"42.00000000","fontsize":""}}},{"name":"table","data":{"id":"T6","caption":[{"lang":"zh","label":[{"name":"text","data":"表6"}],"title":[{"name":"text","data":"CVC-ClinicDB数据集上不同算法分割结果"}]},{"lang":"en","label":[{"name":"text","data":"Tab.6"}],"title":[{"name":"text","data":"Segmentation results of different algorithms on the CVC-ClinicDB dataset"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Method"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Dice"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"MIoU"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Recall"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Precision"}]}]],"body":[[{"align":"center","data":[{"name":"text","data":"UNet"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"12","type":"bibr","rid":"R12","data":[{"name":"text","data":"12"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0.641 9"}]},{"align":"center","data":[{"name":"text","data":"0.471 1"}]},{"align":"center","data":[{"name":"text","data":"0.675 6"}]},{"align":"center","data":[{"name":"text","data":"0.686 8"}]}],[{"align":"center","data":[{"name":"text","data":"ResUNet"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"13","type":"bibr","rid":"R13","data":[{"name":"text","data":"13"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0.451 0"}]},{"align":"center","data":[{"name":"text","data":"0.457 0"}]},{"align":"center","data":[{"name":"text","data":"0.577 5"}]},{"align":"center","data":[{"name":"text","data":"0.561 4"}]}],[{"align":"center","data":[{"name":"text","data":"ResUNet-Mod"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"13","type":"bibr","rid":"R13","data":[{"name":"text","data":"13"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0.778 8"}]},{"align":"center","data":[{"name":"text","data":"0.454 5"}]},{"align":"center","data":[{"name":"text","data":"0.668 3"}]},{"align":"center","data":[{"name":"text","data":"0.887 7"}]}],[{"align":"center","data":[{"name":"text","data":"UNet++"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"14","type":"bibr","rid":"R14","data":[{"name":"text","data":"14"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0.794 0"}]},{"align":"center","data":[{"name":"text","data":"0.729 0"}]},{"align":"center","data":[{"name":"text","data":"-"}]},{"align":"center","data":[{"name":"text","data":"-"}]}],[{"align":"center","data":[{"name":"text","data":"ParNet"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"15","type":"bibr","rid":"R15","data":[{"name":"text","data":"15"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0.899 0"}]},{"align":"center","data":[{"name":"text","data":"0.849"}]},{"align":"center","data":[{"name":"text","data":"-"}]},{"align":"center","data":[{"name":"text","data":"-"}]}],[{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"本文方法"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.954 3"}]}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.913 0"}]}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.899 0"}]}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.964 2"}]}]}]],"foot":[]}],"graphics":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050797&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050794&type=","width":"76.90000153","height":"42.00000000","fontsize":""}}},{"name":"p","data":[{"name":"text","data":"由"},{"name":"xref","data":{"text":"表5","type":"table","rid":"T5","data":[{"name":"text","data":"表5"}]}},{"name":"text","data":"和"},{"name":"xref","data":{"text":"表6","type":"table","rid":"T6","data":[{"name":"text","data":"表6"}]}},{"name":"text","data":"可以发现,本文算法在Kvasir-SEG数据集的测试中Dice系数、平均交并比、召回率和准确率分别为0.919 6、0.853 8、0.853 7、0.953 0;在CVC-ClinicDB数据集的测试中Dice系数、平均交并比、召回率、准确率分别为0.954 3、0.913 0、0.899 0、0.964 2;并且从表中可以看出本文算法在四个评价指标上相对于其它5个算法都有较大提高。实验结果表明本文算法的分割效果较好,准确率较高,在医学影像方面,对肠道息肉图像处理有一定的应用价值。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"4"},{"name":"italic","data":[{"name":"text","data":"."}]},{"name":"text","data":"3"},{"name":"italic","data":[{"name":"text","data":"."}]},{"name":"text","data":"3 不同算法模型的泛化性能对比实验"}],"level":"3","id":"s4c3"}},{"name":"p","data":[{"name":"text","data":"为进一步验证本文算法模型的泛化性能,本文在CVC-ClinicDB和ETIS-Larib数据集上进行了相关实验。在实验中,本文将CVC-ClinicDB数据集按9∶1分成两部分,分别作为训练集和验证集,ETIS-Larib数据集作为测试集,对网络进行训练并得到相应模型,并与FCN-VGG"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"10","type":"bibr","rid":"R10","data":[{"name":"text","data":"10"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"、Mask RCNN"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"33","type":"bibr","rid":"R33","data":[{"name":"text","data":"33"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"、UNet"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"11","type":"bibr","rid":"R11","data":[{"name":"text","data":"11"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"、DoubleUNet"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"16","type":"bibr","rid":"R16","data":[{"name":"text","data":"16"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"四种算法进行了对比。"},{"name":"xref","data":{"text":"图12","type":"fig","rid":"F12","data":[{"name":"text","data":"图12"}]}},{"name":"text","data":"为算法在ETIS-Larib数据集上的部分分割结果,从图中可以看到,本文算法在小目标样本上的分割效果较好,但在某些复杂场景下会出现小范围误分割。"}]},{"name":"fig","data":{"id":"F12","caption":[{"lang":"zh","label":[{"name":"text","data":"图12"}],"title":[{"name":"text","data":"ETIS-Larib数据集上的分割结果"}]},{"lang":"en","label":[{"name":"text","data":"Fig.12"}],"title":[{"name":"text","data":"Segmentation results on ETIS-Larib datasets"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050800&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050808&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050806&type=","width":"75.01467133","height":"50.58833313","fontsize":""}]}},{"name":"p","data":[{"name":"xref","data":{"text":"表7","type":"table","rid":"T7","data":[{"name":"text","data":"表7"}]}},{"name":"text","data":"为在ETIS-Larib数据集上的测试结果,与其它方法的对比,从中可以看到本文算法的平均交并比、召回率和准确率依次为0.632 7、0.723 5、0.815 7高于其它算法,而在Dice系数指标上与DoubleUNet算法基本相当。以上实验表明本文算法的泛化性能较好,在未知数据集上的适应能力较强。"}]},{"name":"table","data":{"id":"T7","caption":[{"lang":"zh","label":[{"name":"text","data":"表7"}],"title":[{"name":"text","data":"ETIS-Larib数据集上不同算法分割结果对比"}]},{"lang":"en","label":[{"name":"text","data":"Tab.7"}],"title":[{"name":"text","data":"Comparison of segmentation results of different algorithms on the ETIS-Larib dataset"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Method"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Dice"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"MIoU"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Recall"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Precision"}]}]],"body":[[{"align":"center","data":[{"name":"text","data":"FCN-VGG"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"10","type":"bibr","rid":"R10","data":[{"name":"text","data":"10"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0.702 3"}]},{"align":"center","data":[{"name":"text","data":"0.542 0"}]},{"align":"center","data":[{"name":"text","data":"-"}]},{"align":"center","data":[{"name":"text","data":"-"}]}],[{"align":"center","data":[{"name":"text","data":"MaskRCNN"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"33","type":"bibr","rid":"R33","data":[{"name":"text","data":"33"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0.704 2"}]},{"align":"center","data":[{"name":"text","data":"0.612 4"}]},{"align":"center","data":[{"name":"text","data":"-"}]},{"align":"center","data":[{"name":"text","data":"-"}]}],[{"align":"center","data":[{"name":"text","data":"U-Net"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"11","type":"bibr","rid":"R11","data":[{"name":"text","data":"11"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0.292 0"}]},{"align":"center","data":[{"name":"text","data":"0.175 9"}]},{"align":"center","data":[{"name":"text","data":"0.593 0"}]},{"align":"center","data":[{"name":"text","data":"0.202 1"}]}],[{"align":"center","data":[{"name":"text","data":"DoubleU-Net"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"16","type":"bibr","rid":"R16","data":[{"name":"text","data":"16"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"bold","data":[{"name":"text","data":"0.764 9"}]}]},{"align":"center","data":[{"name":"text","data":"0.625 5"}]},{"align":"center","data":[{"name":"text","data":"0.715 6"}]},{"align":"center","data":[{"name":"text","data":"0.800 7"}]}],[{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"本文方法"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"0.754 0"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.632 7"}]}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.723 5"}]}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.815 7"}]}]}]],"foot":[]}],"graphics":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050822&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050809&type=","width":"76.90001678","height":"39.00000000","fontsize":""}}}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"4.4 算法在通用医学图像中的应用"}],"level":"2","id":"s4d"}},{"name":"p","data":[{"name":"text","data":"为验证本文算法在通用医学图像分割任务上的有效性,本文在ISIC和DSB数据集上分别做了相应实验验证,并与UNet"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"11","type":"bibr","rid":"R11","data":[{"name":"text","data":"11"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"、Multi-ResUNet"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"34","type":"bibr","rid":"R34","data":[{"name":"text","data":"34"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"、DoubleUNet"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"16","type":"bibr","rid":"R16","data":[{"name":"text","data":"16"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"、UNet++"},{"name":"sup","data":[{"name":"text","data":"[14]"}]},{"name":"text","data":"深度学习基准模型进行了对比实验。由于ISIC和DSB数据集上的图片没有反光的影响,本文算法在上述数据集中没有进行去反光处理。DSB数据集中图片包含多个分割目标且分割目标有堆叠情况,采用条件随机场的后处理方法时会导致分割精度下降,所以该数据集只采用测试时数据扩增的预测推理方法进行后处理,而在ISIC数据集上使用了两种后处理方法。"},{"name":"xref","data":{"text":"图13","type":"fig","rid":"F13","data":[{"name":"text","data":"图13"}]}},{"name":"text","data":"和"},{"name":"xref","data":{"text":"图14","type":"fig","rid":"F14","data":[{"name":"text","data":"图14"}]}},{"name":"text","data":"分别表示在ISIC数据集和DSB数据集上的分割结果,从中可以看出本文算法在通用医学图像公开数据集的分割效果也较好。"}]},{"name":"fig","data":{"id":"F13","caption":[{"lang":"zh","label":[{"name":"text","data":"图13"}],"title":[{"name":"text","data":"ISIC数据集上的分割结果"}]},{"lang":"en","label":[{"name":"text","data":"Fig.13"}],"title":[{"name":"text","data":"Segmentation results on ISIC datasets"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050813&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050816&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050826&type=","width":"75.01467133","height":"62.82266998","fontsize":""}]}},{"name":"fig","data":{"id":"F14","caption":[{"lang":"zh","label":[{"name":"text","data":"图14"}],"title":[{"name":"text","data":"DSB数据集上的分割结果"}]},{"lang":"en","label":[{"name":"text","data":"Fig.14"}],"title":[{"name":"text","data":"Segmentation results on DSB datasets"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050833&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050837&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050835&type=","width":"75.01467133","height":"62.82266998","fontsize":""}]}},{"name":"p","data":[{"name":"xref","data":{"text":"表8","type":"table","rid":"T8","data":[{"name":"text","data":"表8"}]}},{"name":"text","data":"表示在ISIC数据集上本文算法与其它算法的实验结果,从表中可以看到本文算法的Dice系数、平均交并比、召回率和准确率分别为0.909 5、0.847 3、0.903 7、0.950 3,其中四个指标分别比DoubleUNet算法提高了0.013 3、0.026 1、0.025 7和0.004 4。"},{"name":"xref","data":{"text":"表9","type":"table","rid":"T9","data":[{"name":"text","data":"表9"}]}},{"name":"text","data":"表示在DSB数据集上本文算法与其它算法的实验结果,本文算法的Dice系数、平均交并比、召回率和准确率分别为0.919 6、0.852 4、0.665 5、0.964 1,本文算法在Dice系数、召回率和准确率三个指标上达到对比算法中的最佳。"}]},{"name":"table","data":{"id":"T8","caption":[{"lang":"zh","label":[{"name":"text","data":"表8"}],"title":[{"name":"text","data":"ISIC数据集上不同算法分割结果"}]},{"lang":"en","label":[{"name":"text","data":"Tab.8"}],"title":[{"name":"text","data":"Segmentation results of different algorithms on the ISIC dataset"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Method"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Dice"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"MIoU"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Recall"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Precision"}]}]],"body":[[{"align":"center","data":[{"name":"text","data":"U-Net"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"11","type":"bibr","rid":"R11","data":[{"name":"text","data":"11"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"-"}]},{"align":"center","data":[{"name":"text","data":"0"},{"name":"italic","data":[{"name":"text","data":"."}]},{"name":"text","data":"764 2"}]},{"align":"center","data":[{"name":"text","data":"-"}]},{"align":"center","data":[{"name":"text","data":"-"}]}],[{"align":"center","data":[{"name":"text","data":"Multi-ResUNet"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"34","type":"bibr","rid":"R34","data":[{"name":"text","data":"34"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"-"}]},{"align":"center","data":[{"name":"text","data":"0"},{"name":"italic","data":[{"name":"text","data":"."}]},{"name":"text","data":"802 9"}]},{"align":"center","data":[{"name":"text","data":"-"}]},{"align":"center","data":[{"name":"text","data":"-"}]}],[{"align":"center","data":[{"name":"text","data":"DoubleU-Net"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"16","type":"bibr","rid":"R16","data":[{"name":"text","data":"16"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0.896 2"}]},{"align":"center","data":[{"name":"text","data":"0.821 2"}]},{"align":"center","data":[{"name":"text","data":"0.878 0"}]},{"align":"center","data":[{"name":"text","data":"0.945 9"}]}],[{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"本文方法"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.909 5"}]}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.847 3"}]}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.903 7"}]}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.950 3"}]}]}]],"foot":[]}],"graphics":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050839&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050848&type=","width":"76.89999390","height":"32.50000000","fontsize":""}}},{"name":"table","data":{"id":"T9","caption":[{"lang":"zh","label":[{"name":"text","data":"表9"}],"title":[{"name":"text","data":"DSB数据集上不同算法分割结果"}]},{"lang":"en","label":[{"name":"text","data":"Tab.9"}],"title":[{"name":"text","data":"Segmentation results of different algorithms on the DSB dataset"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Method"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Dice"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"MIoU"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Recall"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"Precision"}]}]],"body":[[{"align":"center","data":[{"name":"text","data":"U-Net"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"11","type":"bibr","rid":"R11","data":[{"name":"text","data":"11"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0"},{"name":"italic","data":[{"name":"text","data":"."}]},{"name":"text","data":"757 3"}]},{"align":"center","data":[{"name":"text","data":"0"},{"name":"italic","data":[{"name":"text","data":"."}]},{"name":"text","data":"910 3"}]},{"align":"center","data":[{"name":"text","data":"-"}]},{"align":"center","data":[{"name":"text","data":"-"}]}],[{"align":"center","data":[{"name":"text","data":"UNet++"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"14","type":"bibr","rid":"R14","data":[{"name":"text","data":"14"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0"},{"name":"italic","data":[{"name":"text","data":"."}]},{"name":"text","data":"897 4"}]},{"align":"center","data":[{"name":"bold","data":[{"name":"text","data":"0.925 5"}]}]},{"align":"center","data":[{"name":"text","data":"-"}]},{"align":"center","data":[{"name":"text","data":"-"}]}],[{"align":"center","data":[{"name":"text","data":"DoubleU-Net"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"16","type":"bibr","rid":"R16","data":[{"name":"text","data":"16"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0.913 3"}]},{"align":"center","data":[{"name":"text","data":"0"},{"name":"italic","data":[{"name":"text","data":"."}]},{"name":"text","data":"840 7"}]},{"align":"center","data":[{"name":"text","data":"0.640 7"}]},{"align":"center","data":[{"name":"text","data":"0.949 6"}]}],[{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"本文方法"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.919 6"}]}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"0.852 4"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.665 5"}]}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"bold","data":[{"name":"text","data":"0.964 1"}]}]}]],"foot":[]}],"graphics":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050843&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=27050841&type=","width":"76.90000153","height":"32.50000000","fontsize":""}}},{"name":"p","data":[{"name":"text","data":"从"},{"name":"xref","data":{"text":"表8","type":"table","rid":"T8","data":[{"name":"text","data":"表8"}]}},{"name":"text","data":"和"},{"name":"xref","data":{"text":"表9","type":"table","rid":"T9","data":[{"name":"text","data":"表9"}]}},{"name":"text","data":"的实验结果可以发现本文提出的算法在ISIC和DSB两个数据集上的分割精度对比其它分割算法整体上都有一定的提升。以上实验结果表明本文算法能够较好地完成通用医学图像的分割任务,能够适应较多的医学应用场景。"}]}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"5 结 论"}],"level":"1","id":"s5"}},{"name":"p","data":[{"name":"text","data":"本文针对结直肠息肉的大小、颜色和质地各异,息肉与周围粘膜的边界不清晰,且息肉区域像素在图像中比例较小导致出现分割准确率低等问题,提出了一种改进的DoubleUNet网络结直肠息肉分割算法。本文算法在Kvasir-SEG和CVC-ClinicDB数据集的测试中,Dice系数、平均交并比、召回率和准确率相对于其它经典算法都有较大提升,其中在CVC-ClinicDB数据集上的实验结果相较于基准网络,四个评价指标分别提升了0.030 4、0.051 9、0.053 3、0.005 0。表明本文算法在结直肠息肉图像上分割精度较高,能够辅助医生对结直肠息肉进行诊断,减少临床时的漏诊和误诊,对结直肠息肉图像的处理和分析具有借鉴意义。为验证算法的泛化性和普适性,本文在ETIS-Larib、ISIC、DSB数据集上进行了相关实验,实验结果验证了本文算法在未知数据集上的适应能力较强,且算法在通用医学图像上的分割效果也较好。"}]}]}],"footnote":[],"reflist":{"title":[{"name":"text","data":"参考文献"}],"data":[{"id":"R1","label":"1","citation":[{"lang":"zh","text":[{"name":"text","data":"蔡程飞"},{"name":"text","data":", "},{"name":"text","data":"徐军"},{"name":"text","data":", "},{"name":"text","data":"梁莉"},{"name":"text","data":", "},{"name":"text","data":"等"},{"name":"text","data":". "},{"name":"text","data":"基于深度卷积网络的结直肠全扫描病理图像的多种组织分割"},{"name":"text","data":"[J]. "},{"name":"text","data":"中国生物医学工程学报"},{"name":"text","data":", "},{"name":"text","data":"2017"},{"name":"text","data":", "},{"name":"text","data":"36"},{"name":"text","data":"("},{"name":"text","data":"5"},{"name":"text","data":"): "},{"name":"text","data":"632"},{"name":"text","data":"-"},{"name":"text","data":"636"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.3969/j.issn.0258-8021.2017.05.018"}],"href":"http://dx.doi.org/10.3969/j.issn.0258-8021.2017.05.018"}}],"title":"基于深度卷积网络的结直肠全扫描病理图像的多种组织分割"},{"lang":"en","text":[{"name":"text","data":"CAI C F"},{"name":"text","data":", "},{"name":"text","data":"XU J"},{"name":"text","data":", "},{"name":"text","data":"LIANG L"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"A deep convolutional networks for identifying multiple tissues from colorectal histologic image of whole slide"},{"name":"text","data":"[J]. "},{"name":"text","data":"Chinese Journal of Biomedical Engineering"},{"name":"text","data":", "},{"name":"text","data":"2017"},{"name":"text","data":", "},{"name":"text","data":"36"},{"name":"text","data":"("},{"name":"text","data":"5"},{"name":"text","data":"): "},{"name":"text","data":"632"},{"name":"text","data":"-"},{"name":"text","data":"636"},{"name":"text","data":"."},{"name":"text","data":"(in Chinese)"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.3969/j.issn.0258-8021.2017.05.018"}],"href":"http://dx.doi.org/10.3969/j.issn.0258-8021.2017.05.018"}}],"title":"A deep convolutional networks for identifying multiple tissues from colorectal histologic image of whole slide"}]},{"id":"R2","label":"2","citation":[{"lang":"en","text":[{"name":"text","data":"MORI Y"},{"name":"text","data":", "},{"name":"text","data":"KUDO S E"},{"name":"text","data":". "},{"name":"text","data":"Detecting colorectal polyps via machine learning"},{"name":"text","data":"[J]. "},{"name":"text","data":"Nature Biomedical Engineering"},{"name":"text","data":", "},{"name":"text","data":"2018"},{"name":"text","data":", "},{"name":"text","data":"2"},{"name":"text","data":"("},{"name":"text","data":"10"},{"name":"text","data":"): "},{"name":"text","data":"713"},{"name":"text","data":"-"},{"name":"text","data":"714"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1038/s41551-018-0308-9"}],"href":"http://dx.doi.org/10.1038/s41551-018-0308-9"}}],"title":"Detecting colorectal polyps via machine learning"}]},{"id":"R3","label":"3","citation":[{"lang":"en","text":[{"name":"text","data":"MAMONOV A V"},{"name":"text","data":", "},{"name":"text","data":"FIGUEIREDO I N"},{"name":"text","data":", "},{"name":"text","data":"FIGUEIREDO P N"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Automated polyp detection in colon capsule endoscopy"},{"name":"text","data":"[J]. "},{"name":"text","data":"IEEE Transactions on Medical Imaging"},{"name":"text","data":", "},{"name":"text","data":"2014"},{"name":"text","data":", "},{"name":"text","data":"33"},{"name":"text","data":"("},{"name":"text","data":"7"},{"name":"text","data":"): "},{"name":"text","data":"1488"},{"name":"text","data":"-"},{"name":"text","data":"1502"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1109/tmi.2014.2314959"}],"href":"http://dx.doi.org/10.1109/tmi.2014.2314959"}}],"title":"Automated polyp detection in colon capsule endoscopy"}]},{"id":"R4","label":"4","citation":[{"lang":"en","text":[{"name":"text","data":"TAJBAKHSH N"},{"name":"text","data":", "},{"name":"text","data":"GURUDU S R"},{"name":"text","data":", "},{"name":"text","data":"LIANG J M"},{"name":"text","data":". "},{"name":"text","data":"Automated polyp detection in colonoscopy videos using shape and context information"},{"name":"text","data":"[J]. "},{"name":"text","data":"IEEE Transactions on Medical Imaging"},{"name":"text","data":", "},{"name":"text","data":"2016"},{"name":"text","data":", "},{"name":"text","data":"35"},{"name":"text","data":"("},{"name":"text","data":"2"},{"name":"text","data":"): "},{"name":"text","data":"630"},{"name":"text","data":"-"},{"name":"text","data":"644"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1109/tmi.2015.2487997"}],"href":"http://dx.doi.org/10.1109/tmi.2015.2487997"}}],"title":"Automated polyp detection in colonoscopy videos using shape and context information"}]},{"id":"R5","label":"5","citation":[{"lang":"zh","text":[{"name":"text","data":"秦传波"},{"name":"text","data":", "},{"name":"text","data":"宋子玉"},{"name":"text","data":", "},{"name":"text","data":"曾军英"},{"name":"text","data":", "},{"name":"text","data":"等"},{"name":"text","data":". "},{"name":"text","data":"联合多尺度和注意力-残差的深度监督乳腺癌分割"},{"name":"text","data":"[J]. "},{"name":"text","data":"光学 精密工程"},{"name":"text","data":", "},{"name":"text","data":"2021"},{"name":"text","data":", "},{"name":"text","data":"29"},{"name":"text","data":"("},{"name":"text","data":"4"},{"name":"text","data":"): "},{"name":"text","data":"877"},{"name":"text","data":"-"},{"name":"text","data":"895"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.37188/OPE.20212904.0877"}],"href":"http://dx.doi.org/10.37188/OPE.20212904.0877"}}],"title":"联合多尺度和注意力-残差的深度监督乳腺癌分割"},{"lang":"en","text":[{"name":"text","data":"QIN C B"},{"name":"text","data":", "},{"name":"text","data":"SONG Z Y"},{"name":"text","data":", "},{"name":"text","data":"ZENG J Y"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Deeply supervised breast cancer segmentation combined with multi-scale and attention-residuals"},{"name":"text","data":"[J]. "},{"name":"text","data":"Opt. Precision Eng."},{"name":"text","data":", "},{"name":"text","data":"2021"},{"name":"text","data":", "},{"name":"text","data":"29"},{"name":"text","data":"("},{"name":"text","data":"4"},{"name":"text","data":"): "},{"name":"text","data":"877"},{"name":"text","data":"-"},{"name":"text","data":"895"},{"name":"text","data":"."},{"name":"text","data":"(in Chinese)"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.37188/OPE.20212904.0877"}],"href":"http://dx.doi.org/10.37188/OPE.20212904.0877"}}],"title":"Deeply supervised breast cancer segmentation combined with multi-scale and attention-residuals"}]},{"id":"R6","label":"6","citation":[{"lang":"zh","text":[{"name":"text","data":"刘媛媛"},{"name":"text","data":", "},{"name":"text","data":"张硕"},{"name":"text","data":", "},{"name":"text","data":"于海业"},{"name":"text","data":", "},{"name":"text","data":"等"},{"name":"text","data":". "},{"name":"text","data":"基于语义分割的复杂场景下的秸秆检测"},{"name":"text","data":"[J]. "},{"name":"text","data":"光学 精密工程"},{"name":"text","data":", "},{"name":"text","data":"2020"},{"name":"text","data":", "},{"name":"text","data":"28"},{"name":"text","data":"("},{"name":"text","data":"1"},{"name":"text","data":"): "},{"name":"text","data":"200"},{"name":"text","data":"-"},{"name":"text","data":"211"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.3788/ope.20202801.0200"}],"href":"http://dx.doi.org/10.3788/ope.20202801.0200"}}],"title":"基于语义分割的复杂场景下的秸秆检测"},{"lang":"en","text":[{"name":"text","data":"LIU Y Y"},{"name":"text","data":", "},{"name":"text","data":"ZHANG S"},{"name":"text","data":", "},{"name":"text","data":"YU H Y"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Straw detection algorithm based on semantic segmentation in complex farm scenarios"},{"name":"text","data":"[J]. "},{"name":"text","data":"Opt. Precision Eng."},{"name":"text","data":", "},{"name":"text","data":"2020"},{"name":"text","data":", "},{"name":"text","data":"28"},{"name":"text","data":"("},{"name":"text","data":"1"},{"name":"text","data":"): "},{"name":"text","data":"200"},{"name":"text","data":"-"},{"name":"text","data":"211"},{"name":"text","data":"."},{"name":"text","data":"(in Chinese)"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.3788/ope.20202801.0200"}],"href":"http://dx.doi.org/10.3788/ope.20202801.0200"}}],"title":"Straw detection algorithm based on semantic segmentation in complex farm scenarios"}]},{"id":"R7","label":"7","citation":[{"lang":"zh","text":[{"name":"text","data":"陈彦彤"},{"name":"text","data":", "},{"name":"text","data":"李雨阳"},{"name":"text","data":", "},{"name":"text","data":"吕石立"},{"name":"text","data":", "},{"name":"text","data":"等"},{"name":"text","data":". "},{"name":"text","data":"基于深度语义分割的多源遥感图像海面溢油监测"},{"name":"text","data":"[J]. "},{"name":"text","data":"光学 精密工程"},{"name":"text","data":", "},{"name":"text","data":"2020"},{"name":"text","data":", "},{"name":"text","data":"28"},{"name":"text","data":"("},{"name":"text","data":"5"},{"name":"text","data":"): "},{"name":"text","data":"1165"},{"name":"text","data":"-"},{"name":"text","data":"1176"},{"name":"text","data":"."}],"title":"基于深度语义分割的多源遥感图像海面溢油监测"},{"lang":"en","text":[{"name":"text","data":"CHEN Y T"},{"name":"text","data":", "},{"name":"text","data":"LI Y Y"},{"name":"text","data":", "},{"name":"text","data":"LÜ S L"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Research on oil spill monitoring of multi-source remote sensing image based on deep semantic segmentation"},{"name":"text","data":"[J]. "},{"name":"text","data":"Opt. Precision Eng."},{"name":"text","data":", "},{"name":"text","data":"2020"},{"name":"text","data":", "},{"name":"text","data":"28"},{"name":"text","data":"("},{"name":"text","data":"5"},{"name":"text","data":"): "},{"name":"text","data":"1165"},{"name":"text","data":"-"},{"name":"text","data":"1176"},{"name":"text","data":"."},{"name":"text","data":"(in Chinese)"}],"title":"Research on oil spill monitoring of multi-source remote sensing image based on deep semantic segmentation"}]},{"id":"R8","label":"8","citation":[{"lang":"en","text":[{"name":"text","data":"YU L Q"},{"name":"text","data":", "},{"name":"text","data":"CHEN H"},{"name":"text","data":", "},{"name":"text","data":"DOU Q"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Integrating online and offline three-dimensional deep learning for automated polyp detection in colonoscopy videos"},{"name":"text","data":"[J]. "},{"name":"text","data":"IEEE Journal of Biomedical and Health Informatics"},{"name":"text","data":", "},{"name":"text","data":"2017"},{"name":"text","data":", "},{"name":"text","data":"21"},{"name":"text","data":"("},{"name":"text","data":"1"},{"name":"text","data":"): "},{"name":"text","data":"65"},{"name":"text","data":"-"},{"name":"text","data":"75"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1109/jbhi.2016.2637004"}],"href":"http://dx.doi.org/10.1109/jbhi.2016.2637004"}}],"title":"Integrating online and offline three-dimensional deep learning for automated polyp detection in colonoscopy videos"}]},{"id":"R9","label":"9","citation":[{"lang":"en","text":[{"name":"text","data":"ZHANG R K"},{"name":"text","data":", "},{"name":"text","data":"ZHENG Y L"},{"name":"text","data":", "},{"name":"text","data":"POON C C Y"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Polyp detection during colonoscopy using a regression-based convolutional neural network with a tracker"},{"name":"text","data":"[J]. "},{"name":"text","data":"Pattern Recognition"},{"name":"text","data":", "},{"name":"text","data":"2018"},{"name":"text","data":", "},{"name":"text","data":"83"},{"name":"text","data":": "},{"name":"text","data":"209"},{"name":"text","data":"-"},{"name":"text","data":"219"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1016/j.patcog.2018.05.026"}],"href":"http://dx.doi.org/10.1016/j.patcog.2018.05.026"}}],"title":"Polyp detection during colonoscopy using a regression-based convolutional neural network with a tracker"}]},{"id":"R10","label":"10","citation":[{"lang":"en","text":[{"name":"text","data":"BRANDAO P"},{"name":"text","data":", "},{"name":"text","data":"MAZOMENOS E"},{"name":"text","data":", "},{"name":"text","data":"CIUTI G"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Fully convolutional neural networks for polyp segmentation in colonoscopy"},{"name":"text","data":"[C]. "},{"name":"text","data":"SPIE Medical Imaging. Proc SPIE 10134, Medical Imaging 2017"},{"name":"text","data":": "},{"name":"text","data":"Computer-Aided Diagnosis, Orlando, Florida, USA"},{"name":"text","data":". "},{"name":"text","data":"2017"},{"name":"text","data":", "},{"name":"text","data":"10134"},{"name":"text","data":": "},{"name":"text","data":"101"},{"name":"text","data":"-"},{"name":"text","data":"107"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1117/12.2254361"}],"href":"http://dx.doi.org/10.1117/12.2254361"}}],"title":"Fully convolutional neural networks for polyp segmentation in colonoscopy"}]},{"id":"R11","label":"11","citation":[{"lang":"en","text":[{"name":"text","data":"LONG J"},{"name":"text","data":", "},{"name":"text","data":"SHELHAMER E"},{"name":"text","data":", "},{"name":"text","data":"DARRELL T"},{"name":"text","data":". "},{"name":"text","data":"Fully convolutional networks for semantic segmentation"},{"name":"text","data":"[C]. "},{"name":"text","data":"2015 IEEE Conference on Computer Vision and Pattern Recognition"},{"name":"text","data":". "},{"name":"text","data":"712,2015"},{"name":"text","data":", "},{"name":"text","data":"Boston, MA, USA"},{"name":"text","data":". "},{"name":"text","data":"IEEE"},{"name":"text","data":", "},{"name":"text","data":"2015"},{"name":"text","data":": "},{"name":"text","data":"3431"},{"name":"text","data":"-"},{"name":"text","data":"3440"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1109/cvpr.2015.7298965"}],"href":"http://dx.doi.org/10.1109/cvpr.2015.7298965"}}],"title":"Fully convolutional networks for semantic segmentation"}]},{"id":"R12","label":"12","citation":[{"lang":"en","text":[{"name":"text","data":"RONNEBERGER O"},{"name":"text","data":", "},{"name":"text","data":"FISCHER P"},{"name":"text","data":", "},{"name":"text","data":"BROX T"},{"name":"text","data":". "},{"name":"text","data":"U-net: convolutional networks for biomedical image segmentation"},{"name":"text","data":"[C]. "},{"name":"text","data":"Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015"},{"name":"text","data":", "},{"name":"text","data":"2015"},{"name":"text","data":": "},{"name":"text","data":"234"},{"name":"text","data":"-"},{"name":"text","data":"241"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1007/978-3-319-24574-4_28"}],"href":"http://dx.doi.org/10.1007/978-3-319-24574-4_28"}}],"title":"U-net: convolutional networks for biomedical image segmentation"}]},{"id":"R13","label":"13","citation":[{"lang":"en","text":[{"name":"text","data":"ZHANG Z X"},{"name":"text","data":", "},{"name":"text","data":"LIU Q J"},{"name":"text","data":", "},{"name":"text","data":"WANG Y H"},{"name":"text","data":". "},{"name":"text","data":"Road extraction by deep residual U-net"},{"name":"text","data":"[J]. "},{"name":"text","data":"IEEE Geoscience and Remote Sensing Letters"},{"name":"text","data":", "},{"name":"text","data":"2018"},{"name":"text","data":", "},{"name":"text","data":"15"},{"name":"text","data":"("},{"name":"text","data":"5"},{"name":"text","data":"): "},{"name":"text","data":"749"},{"name":"text","data":"-"},{"name":"text","data":"753"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1109/lgrs.2018.2802944"}],"href":"http://dx.doi.org/10.1109/lgrs.2018.2802944"}}],"title":"Road extraction by deep residual U-net"}]},{"id":"R14","label":"14","citation":[{"lang":"en","text":[{"name":"text","data":"ZHOU Z W"},{"name":"text","data":", "},{"name":"text","data":"SIDDIQUEE M M R"},{"name":"text","data":", "},{"name":"text","data":"TAJBAKHSH N"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"UNet++: a nested U-net architecture for medical image segmentation"},{"name":"text","data":"[J]. "},{"name":"text","data":"Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: 4th International Workshop, DLMIA 2018, and 8th International Workshop"},{"name":"text","data":", ML-CDS "},{"name":"text","data":"2018"},{"name":"text","data":", Held in Conjunction With MICCAI 2018, Granada, Spain, S , 2018, "},{"name":"text","data":"11045"},{"name":"text","data":": "},{"name":"text","data":"3"},{"name":"text","data":"-"},{"name":"text","data":"11"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1007/978-3-030-00889-5_1"}],"href":"http://dx.doi.org/10.1007/978-3-030-00889-5_1"}}],"title":"UNet++: a nested U-net architecture for medical image segmentation"}]},{"id":"R15","label":"15","citation":[{"lang":"en","text":[{"name":"text","data":"FAN D P"},{"name":"text","data":", "},{"name":"text","data":"JI G P"},{"name":"text","data":", "},{"name":"text","data":"ZHOU T"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"PraNet: parallel reverse attention network for polyp segmentation"},{"name":"text","data":"[C]. "},{"name":"text","data":"Medical Image Computing and Computer Assisted Intervention-MICCAI 2020"},{"name":"text","data":", "},{"name":"text","data":"2020"},{"name":"text","data":": "},{"name":"text","data":"263"},{"name":"text","data":"-"},{"name":"text","data":"273"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1007/978-3-030-59725-2_26"}],"href":"http://dx.doi.org/10.1007/978-3-030-59725-2_26"}}],"title":"PraNet: parallel reverse attention network for polyp segmentation"}]},{"id":"R16","label":"16","citation":[{"lang":"en","text":[{"name":"text","data":"JHA D"},{"name":"text","data":", "},{"name":"text","data":"RIEGLER M A"},{"name":"text","data":", "},{"name":"text","data":"JOHANSEN D"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"DoubleU-net: a deep convolutional neural network for medical image segmentation"},{"name":"text","data":"[C]. "},{"name":"text","data":"2020 IEEE 33rd International Symposium on Computer-Based Medical Systems"},{"name":"text","data":". "},{"name":"text","data":"2830,2020"},{"name":"text","data":", "},{"name":"text","data":"Rochester"},{"name":"text","data":", "},{"name":"text","data":"MN"},{"name":"text","data":", "},{"name":"text","data":"USA"},{"name":"text","data":". "},{"name":"text","data":"IEEE"},{"name":"text","data":", "},{"name":"text","data":"2020"},{"name":"text","data":": "},{"name":"text","data":"558"},{"name":"text","data":"-"},{"name":"text","data":"564"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1109/cbms49503.2020.00111"}],"href":"http://dx.doi.org/10.1109/cbms49503.2020.00111"}}],"title":"DoubleU-net: a deep convolutional neural network for medical image segmentation"}]},{"id":"R17","label":"17","citation":[{"lang":"zh","text":[{"name":"text","data":"王亚刚"},{"name":"text","data":", "},{"name":"text","data":"郗怡媛"},{"name":"text","data":", "},{"name":"text","data":"潘晓英"},{"name":"text","data":". "},{"name":"text","data":"改进DeepLabv3+网络的肠道息肉分割方法"},{"name":"text","data":"[J]. "},{"name":"text","data":"计算机科学与探索"},{"name":"text","data":", "},{"name":"text","data":"2020"},{"name":"text","data":", "},{"name":"text","data":"14"},{"name":"text","data":"("},{"name":"text","data":"7"},{"name":"text","data":"): "},{"name":"text","data":"1243"},{"name":"text","data":"-"},{"name":"text","data":"1250"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.3778/j.issn.1673-9418.1907053"}],"href":"http://dx.doi.org/10.3778/j.issn.1673-9418.1907053"}}],"title":"改进DeepLabv3+网络的肠道息肉分割方法"},{"lang":"en","text":[{"name":"text","data":"WANG Y G"},{"name":"text","data":", "},{"name":"text","data":"XI Y Y"},{"name":"text","data":", "},{"name":"text","data":"PAN X Y"},{"name":"text","data":". "},{"name":"text","data":"Method for intestinal polyp segmentation by improving DeepLabv3+ network"},{"name":"text","data":"[J]. "},{"name":"text","data":"Journal of Frontiers of Computer Science and Technology"},{"name":"text","data":", "},{"name":"text","data":"2020"},{"name":"text","data":", "},{"name":"text","data":"14"},{"name":"text","data":"("},{"name":"text","data":"7"},{"name":"text","data":"): "},{"name":"text","data":"1243"},{"name":"text","data":"-"},{"name":"text","data":"1250"},{"name":"text","data":"."},{"name":"text","data":"(in Chinese)"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.3778/j.issn.1673-9418.1907053"}],"href":"http://dx.doi.org/10.3778/j.issn.1673-9418.1907053"}}],"title":"Method for intestinal polyp segmentation by improving DeepLabv3+ network"}]},{"id":"R18","label":"18","citation":[{"lang":"en","text":[{"name":"text","data":"HU J"},{"name":"text","data":", "},{"name":"text","data":"SHEN L"},{"name":"text","data":", "},{"name":"text","data":"SUN G"},{"name":"text","data":". "},{"name":"text","data":"Squeeze-and-excitation networks"},{"name":"text","data":"[C]. "},{"name":"text","data":"2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition"},{"name":"text","data":". "},{"name":"text","data":"1823,2018"},{"name":"text","data":", "},{"name":"text","data":"Salt Lake City, UT, USA"},{"name":"text","data":". "},{"name":"text","data":"IEEE"},{"name":"text","data":", "},{"name":"text","data":"2018"},{"name":"text","data":": "},{"name":"text","data":"7132"},{"name":"text","data":"-"},{"name":"text","data":"7141"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1109/cvpr.2018.00745"}],"href":"http://dx.doi.org/10.1109/cvpr.2018.00745"}}],"title":"Squeeze-and-excitation networks"}]},{"id":"R19","label":"19","citation":[{"lang":"en","text":[{"name":"text","data":"VASWANI A"},{"name":"text","data":", "},{"name":"text","data":"SHAZEER N"},{"name":"text","data":", "},{"name":"text","data":"PARMAR N"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Attention is all you need"},{"name":"text","data":"[J]. "},{"name":"text","data":"Advances in neural information processing systems"},{"name":"text","data":", "},{"name":"text","data":"2017"},{"name":"text","data":": "},{"name":"text","data":"5998"},{"name":"text","data":"-"},{"name":"text","data":"6008"},{"name":"text","data":"."}],"title":"Attention is all you need"}]},{"id":"R20","label":"20","citation":[{"lang":"en","text":[{"name":"text","data":"LI H C"},{"name":"text","data":", "},{"name":"text","data":"XIONG P F"},{"name":"text","data":", "},{"name":"text","data":"AN J"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Pyramid attention network for semantic segmentation"},{"name":"text","data":"[J]. "},{"name":"text","data":"arXiv preprint"},{"name":"text","data":" arXiv:"},{"name":"text","data":"1805.10180"},{"name":"text","data":", "},{"name":"text","data":"2018"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.23919/chicc.2019.8865946"}],"href":"http://dx.doi.org/10.23919/chicc.2019.8865946"}}],"title":"Pyramid attention network for semantic segmentation"}]},{"id":"R21","label":"21","citation":[{"lang":"en","text":[{"name":"text","data":"OKTAY O"},{"name":"text","data":", "},{"name":"text","data":"SCHLEMPER J"},{"name":"text","data":", "},{"name":"text","data":"FOLGOC L L"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Attention U-net: learning where to look for the pancreas"},{"name":"text","data":"[EB/OL]. "},{"name":"text","data":"2018: arXiv"},{"name":"text","data":": "},{"name":"text","data":"1804"},{"name":"text","data":".03999[cs.CV]. "},{"name":"uri","data":{"text":[{"name":"text","data":"https://arxiv.org/abs/1804.03999"}],"href":"https://arxiv.org/abs/1804.03999"}}],"title":"Attention U-net: learning where to look for the pancreas"}]},{"id":"R22","label":"22","citation":[{"lang":"en","text":[{"name":"text","data":"CHEN L C"},{"name":"text","data":", "},{"name":"text","data":"ZHU Y K"},{"name":"text","data":", "},{"name":"text","data":"PAPANDREOU G"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Encoder-decoder with atrous separable convolution for semantic image segmentation"},{"name":"text","data":"[C]. "},{"name":"text","data":"Computer Vision-ECCV 2018"},{"name":"text","data":", "},{"name":"text","data":"2018"},{"name":"text","data":": "},{"name":"text","data":"801"},{"name":"text","data":"-"},{"name":"text","data":"818"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1007/978-3-030-01234-2_49"}],"href":"http://dx.doi.org/10.1007/978-3-030-01234-2_49"}}],"title":"Encoder-decoder with atrous separable convolution for semantic image segmentation"}]},{"id":"R23","label":"23","citation":[{"lang":"en","text":[{"name":"text","data":"YANG M K"},{"name":"text","data":", "},{"name":"text","data":"YU K"},{"name":"text","data":", "},{"name":"text","data":"ZHANG C"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"DenseASPP for semantic segmentation in street scenes"},{"name":"text","data":"[C]. "},{"name":"text","data":"2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition"},{"name":"text","data":". "},{"name":"text","data":"1823,2018"},{"name":"text","data":", "},{"name":"text","data":"Salt Lake City, UT, USA"},{"name":"text","data":". "},{"name":"text","data":"IEEE"},{"name":"text","data":", "},{"name":"text","data":"2018"},{"name":"text","data":": "},{"name":"text","data":"3684"},{"name":"text","data":"-"},{"name":"text","data":"3692"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1109/cvpr.2018.00388"}],"href":"http://dx.doi.org/10.1109/cvpr.2018.00388"}}],"title":"DenseASPP for semantic segmentation in street scenes"}]},{"id":"R24","label":"24","citation":[{"lang":"en","text":[{"name":"text","data":"JADON S"},{"name":"text","data":". "},{"name":"text","data":"A survey of loss functions for semantic segmentation"},{"name":"text","data":"[C]."},{"name":"text","data":"2020 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology"},{"name":"text","data":". "},{"name":"text","data":"2729,2020"},{"name":"text","data":", "},{"name":"text","data":"Via del Mar"},{"name":"text","data":", "},{"name":"text","data":"Chile"},{"name":"text","data":". "},{"name":"text","data":"IEEE"},{"name":"text","data":", "},{"name":"text","data":"2020"},{"name":"text","data":": "},{"name":"text","data":"1"},{"name":"text","data":"-"},{"name":"text","data":"7"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1109/cibcb48159.2020.9277638"}],"href":"http://dx.doi.org/10.1109/cibcb48159.2020.9277638"}}],"title":"A survey of loss functions for semantic segmentation"}]},{"id":"R25","label":"25","citation":[{"lang":"en","text":[{"name":"text","data":"JHA D"},{"name":"text","data":", "},{"name":"text","data":"SMEDSRUD P H"},{"name":"text","data":", "},{"name":"text","data":"RIEGLER M A"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Kvasir-seg: A segmented polyp dataset"},{"name":"text","data":"[C]."},{"name":"text","data":"International Conference on Multimedia Modeling. Springer"},{"name":"text","data":", "},{"name":"text","data":"Cham"},{"name":"text","data":", "},{"name":"text","data":"2020"},{"name":"text","data":": "},{"name":"text","data":"451"},{"name":"text","data":"-"},{"name":"text","data":"462"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1007/978-3-030-37734-2_37"}],"href":"http://dx.doi.org/10.1007/978-3-030-37734-2_37"}}],"title":"Kvasir-seg: A segmented polyp dataset"}]},{"id":"R26","label":"26","citation":[{"lang":"en","text":[{"name":"text","data":"BERNAL J"},{"name":"text","data":", "},{"name":"text","data":"SÁNCHEZ F J"},{"name":"text","data":", "},{"name":"text","data":"FERNÁNDEZ-ESPARRACH G"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"WM-DOVA maps for accurate polyp highlighting in colonoscopy: validation vs. saliency maps from physicians"},{"name":"text","data":"[J]. "},{"name":"text","data":"Computerized Medical Imaging and Graphics"},{"name":"text","data":", "},{"name":"text","data":"2015"},{"name":"text","data":", "},{"name":"text","data":"43"},{"name":"text","data":": "},{"name":"text","data":"99"},{"name":"text","data":"-"},{"name":"text","data":"111"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1016/j.compmedimag.2015.02.007"}],"href":"http://dx.doi.org/10.1016/j.compmedimag.2015.02.007"}}],"title":"WM-DOVA maps for accurate polyp highlighting in colonoscopy: validation vs. saliency maps from physicians"}]},{"id":"R27","label":"27","citation":[{"lang":"en","text":[{"name":"text","data":"SILVA J"},{"name":"text","data":", "},{"name":"text","data":"HISTACE A"},{"name":"text","data":", "},{"name":"text","data":"ROMAIN O"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Toward embedded detection of polyps in WCE images for early diagnosis of colorectal cancer"},{"name":"text","data":"[J]. "},{"name":"text","data":"International Journal of Computer Assisted Radiology and Surgery"},{"name":"text","data":", "},{"name":"text","data":"2014"},{"name":"text","data":", "},{"name":"text","data":"9"},{"name":"text","data":"("},{"name":"text","data":"2"},{"name":"text","data":"): "},{"name":"text","data":"283"},{"name":"text","data":"-"},{"name":"text","data":"293"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1007/s11548-013-0926-3"}],"href":"http://dx.doi.org/10.1007/s11548-013-0926-3"}}],"title":"Toward embedded detection of polyps in WCE images for early diagnosis of colorectal cancer"}]},{"id":"R28","label":"28","citation":[{"lang":"en","text":[{"name":"text","data":"BERNAL J"},{"name":"text","data":", "},{"name":"text","data":"TAJKBAKSH N"},{"name":"text","data":", "},{"name":"text","data":"SANCHEZ F J"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Comparative validation of polyp detection methods in video colonoscopy: results from the MICCAI 2015 endoscopic vision challenge"},{"name":"text","data":"[J]. "},{"name":"text","data":"IEEE Transactions on Medical Imaging"},{"name":"text","data":", "},{"name":"text","data":"2017"},{"name":"text","data":", "},{"name":"text","data":"36"},{"name":"text","data":"("},{"name":"text","data":"6"},{"name":"text","data":"): "},{"name":"text","data":"1231"},{"name":"text","data":"-"},{"name":"text","data":"1249"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1109/tmi.2017.2664042"}],"href":"http://dx.doi.org/10.1109/tmi.2017.2664042"}}],"title":"Comparative validation of polyp detection methods in video colonoscopy: results from the MICCAI 2015 endoscopic vision challenge"}]},{"id":"R29","label":"29","citation":[{"lang":"en","text":[{"name":"text","data":"CODELLA N C F"},{"name":"text","data":", "},{"name":"text","data":"GUTMAN D"},{"name":"text","data":", "},{"name":"text","data":"CELEBI M E"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Skin lesion analysis toward melanoma detection: a challenge at the 2017 International symposium on biomedical imaging (ISBI), hosted by the international skin imaging collaboration (ISIC)"},{"name":"text","data":"[C]. "},{"name":"text","data":"2018 IEEE 15th International Symposium on Biomedical Imaging"},{"name":"text","data":". "},{"name":"text","data":"47,2018"},{"name":"text","data":", "},{"name":"text","data":"Washington, DC, USA"},{"name":"text","data":". "},{"name":"text","data":"IEEE"},{"name":"text","data":", "},{"name":"text","data":"2018"},{"name":"text","data":": "},{"name":"text","data":"168"},{"name":"text","data":"-"},{"name":"text","data":"172"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1109/isbi.2018.8363547"}],"href":"http://dx.doi.org/10.1109/isbi.2018.8363547"}}],"title":"Skin lesion analysis toward melanoma detection: a challenge at the 2017 International symposium on biomedical imaging (ISBI), hosted by the international skin imaging collaboration (ISIC)"}]},{"id":"R30","label":"30","citation":[{"lang":"en","text":[{"name":"text","data":"ARNOLD M"},{"name":"text","data":", "},{"name":"text","data":"GHOSH A"},{"name":"text","data":", "},{"name":"text","data":"AMELING S"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Automatic segmentation and inpainting of specular highlights for endoscopic imaging"},{"name":"text","data":"[J]. "},{"name":"text","data":"EURASIP Journal on Image and Video Processing"},{"name":"text","data":", "},{"name":"text","data":"2010"},{"name":"text","data":", "},{"name":"text","data":"2010"},{"name":"text","data":": "},{"name":"text","data":"1"},{"name":"text","data":"-"},{"name":"text","data":"12"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1155/2010/814319"}],"href":"http://dx.doi.org/10.1155/2010/814319"}}],"title":"Automatic segmentation and inpainting of specular highlights for endoscopic imaging"}]},{"id":"R31","label":"31","citation":[{"lang":"en","text":[{"name":"text","data":"KRÄHENBÜHL P"},{"name":"text","data":", "},{"name":"text","data":"KOLTUN V"},{"name":"text","data":". "},{"name":"text","data":"Efficient inference in fully connected CRFs with Gaussian edge potentials"},{"name":"text","data":"[J]. "},{"name":"text","data":"Advances in neural information processing systems"},{"name":"text","data":", "},{"name":"text","data":"2011"},{"name":"text","data":": "},{"name":"text","data":"109"},{"name":"text","data":"-"},{"name":"text","data":"117"},{"name":"text","data":"."}],"title":"Efficient inference in fully connected CRFs with Gaussian edge potentials"}]},{"id":"R32","label":"32","citation":[{"lang":"zh","text":[{"name":"text","data":"侯腾璇"},{"name":"text","data":", "},{"name":"text","data":"赵涓涓"},{"name":"text","data":", "},{"name":"text","data":"强彦"},{"name":"text","data":", "},{"name":"text","data":"等"},{"name":"text","data":". "},{"name":"text","data":"CRF 3D-UNet肺结节分割网络"},{"name":"text","data":"[J]. "},{"name":"text","data":"计算机工程与设计"},{"name":"text","data":", "},{"name":"text","data":"2020"},{"name":"text","data":", "},{"name":"text","data":"41"},{"name":"text","data":"("},{"name":"text","data":"6"},{"name":"text","data":"): "},{"name":"text","data":"1663"},{"name":"text","data":"-"},{"name":"text","data":"1669"},{"name":"text","data":"."}],"title":"CRF 3D-UNet肺结节分割网络"},{"lang":"en","text":[{"name":"text","data":"HOU T X"},{"name":"text","data":", "},{"name":"text","data":"ZHAO J J"},{"name":"text","data":", "},{"name":"text","data":"QIANG Y"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Pulmonary nodules segmentation based on CRF 3D-UNet structure"},{"name":"text","data":"[J]. "},{"name":"text","data":"Computer Engineering and Design"},{"name":"text","data":", "},{"name":"text","data":"2020"},{"name":"text","data":", "},{"name":"text","data":"41"},{"name":"text","data":"("},{"name":"text","data":"6"},{"name":"text","data":"): "},{"name":"text","data":"1663"},{"name":"text","data":"-"},{"name":"text","data":"1669"},{"name":"text","data":"."},{"name":"text","data":"(in Chinese)"}],"title":"Pulmonary nodules segmentation based on CRF 3D-UNet structure"}]},{"id":"R33","label":"33","citation":[{"lang":"en","text":[{"name":"text","data":"QADIR H A"},{"name":"text","data":", "},{"name":"text","data":"SHIN Y"},{"name":"text","data":", "},{"name":"text","data":"SOLHUSVIK J"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Polyp detection and segmentation using mask R-CNN: does a deeper feature extractor CNN always perform better?"},{"name":"text","data":" [C]. "},{"name":"text","data":"2019 13th International Symposium on Medical Information and Communication Technology (ISMICT)."},{"name":"text","data":"810,2019"},{"name":"text","data":", Oslo, Norway. "},{"name":"text","data":"IEEE"},{"name":"text","data":", "},{"name":"text","data":"2019"},{"name":"text","data":": "},{"name":"text","data":"1"},{"name":"text","data":"-"},{"name":"text","data":"6"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1109/ismict.2019.8743694"}],"href":"http://dx.doi.org/10.1109/ismict.2019.8743694"}}],"title":"Polyp detection and segmentation using mask R-CNN: does a deeper feature extractor CNN always perform better?"}]},{"id":"R34","label":"34","citation":[{"lang":"en","text":[{"name":"text","data":"IBTEHAZ N"},{"name":"text","data":", "},{"name":"text","data":"RAHMAN M S"},{"name":"text","data":". "},{"name":"text","data":"MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation"},{"name":"text","data":"[J]. "},{"name":"text","data":"Neural Networks"},{"name":"text","data":", "},{"name":"text","data":"2020"},{"name":"text","data":", "},{"name":"text","data":"121"},{"name":"text","data":": "},{"name":"text","data":"74"},{"name":"text","data":"-"},{"name":"text","data":"87"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1016/j.neunet.2019.08.025"}],"href":"http://dx.doi.org/10.1016/j.neunet.2019.08.025"}}],"title":"MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation"}]}]},"response":[],"contributions":[],"acknowledgements":[],"conflict":[],"supportedby":[],"articlemeta":{"doi":"10.37188/OPE.20223008.0970","clc":[[{"name":"text","data":"TP391.4"}]],"dc":[{"name":"text","data":"A"}],"publisherid":"1004-924X(2022)08-0970-14","citeme":[{"data":[{"name":"text","data":"徐昌佳,易见兵,曹锋等.采用DoubleUNet网络的结直肠息肉分割算法[J].光学精密工程,2022,30(08):970-983."}],"text":"徐昌佳,易见兵,曹锋等.采用DoubleUNet网络的结直肠息肉分割算法[J].光学精密工程,2022,30(08):970-983."},{"data":[{"name":"text","data":"XU Changjia,YI Jianbing,CAO Feng,et al.Colorectal polyp segmentation algorithm using DoubleUNet network[J].Optics and Precision Engineering,2022,30(08):970-983."}],"text":"XU Changjia,YI Jianbing,CAO Feng,et al.Colorectal polyp segmentation algorithm using DoubleUNet network[J].Optics and Precision Engineering,2022,30(08):970-983."}],"fundinggroup":[{"lang":"zh","text":[{"name":"text","data":"国家自然科学基金项目(No.61862031);江西省自然科学基金项目(No.20181BAB202004);江西省教育厅科技项目(No.GJJ190458,No.GJJ200818);赣州市科技计划项目(No.GZKJ20206030)"}]}],"history":{"received":"2021-05-16","revised":"2021-06-24","ppub":"2022-04-25","opub":"2022-04-26"}},"appendix":[],"type":"research-article","ethics":[],"backSec":[],"supplementary":[],"journalTitle":"光学精密工程","issue":"8","volume":"30","originalSource":[]}