1. School of Biological Science and Medical Engineering, Southeast University, Nanjing 210016, Jiangsu, China
2. The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, Jiangsu, China
[ "刘雪岩(1996-),男,吉林长春人,硕士研究生,2019年于东南大学获得学士学位,主要研究方向为图像处理及深度学习。E-mail:xueyanliu@seu.edu.cn" ]
[ "周 平(1980-),男,辽宁凌源人,副教授,硕士生导师,2002年、2007年于中国科学技术大学分别获得学士和博士学位,主要研究方向为图像处理及三维成像。E-mail:capzhou@163.com" ]
Received: 2021-05-22
Revised: 2021-07-08
Published in print: 2022-03-10
Cite this article: LIU Xueyan, XU Yuda, LEI Jianxin, et al. Three-dimensional light field endoscope calibration based on light field disparity amplifier and super-resolution network[J]. Optics and Precision Engineering, 2022, 30(05): 510-517. DOI: 10.37188/OPE.2021.0332.
Three-dimensional (3D) light field imaging is an important research direction for achieving 3D imaging in laparoscopic surgery, and calibration is the foundation on which a 3D light field endoscope (LFE) performs 3D imaging. Calibration is challenging because the light field bandwidth product is limited and the light field disparity of a 3D LFE is smaller than that of a conventional light field camera, which makes it difficult to obtain acceptable calibration results. In this paper, a disparity amplification method is proposed that converts the direct computation of the small light field disparity into the indirect computation of the distance between two feature points in the 3D scene. Compared with the conventional disparity of a single feature point across sub-aperture images, the spacing between two feature points appears in the light field image as larger, easily detectable point-to-point and point-to-line distances, which fundamentally improves the calibration accuracy of the LFE. Furthermore, an improved super-resolution network based on SRDenseNet is proposed, in which cascaded channel-attention dense blocks extract features from low-resolution light field images. The network increases the two-dimensional (2D) spatial resolution and 2D angular resolution of the 4D light field data simultaneously, which indirectly improves the 3D LFE calibration accuracy. Experimental results show that, after light field disparity amplification and super-resolution processing, the re-projection error of the 3D LFE calibration decreases by 16% and the R-squared (R²) increases by 6%.
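The disparity-amplification idea can be illustrated with a small geometric sketch. The Python snippet below is only an illustration under stated assumptions, not the authors' implementation: the corner coordinates, the chosen point pair, and the grid-line endpoints are hypothetical. It shows why a point-to-point distance and a point-to-line distance measured in a sub-aperture image are larger, easier-to-detect quantities than a raw sub-pixel disparity.

```python
import numpy as np

def point_to_point(p, q):
    """Euclidean distance (in pixels) between two detected feature points."""
    return float(np.linalg.norm(np.asarray(q, dtype=float) - np.asarray(p, dtype=float)))

def point_to_line(p, a, b):
    """Perpendicular distance (in pixels) from point p to the line through a and b."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    p = np.asarray(p, dtype=float)
    d = b - a
    # The 2D cross-product magnitude gives twice the triangle area; divide by the base length.
    return abs(d[0] * (p - a)[1] - d[1] * (p - a)[0]) / np.linalg.norm(d)

# Hypothetical checkerboard-corner detections in one sub-aperture image.
corner_1 = (112.3, 245.8)
corner_2 = (412.9, 247.1)                        # a distant corner in the same row
line_a, line_b = (110.0, 300.0), (415.0, 302.0)  # two corners defining a grid line

# Spacings of hundreds of pixels are far easier to measure reliably than a raw
# light field disparity, which for an LFE is often only a fraction of a pixel.
print(point_to_point(corner_1, corner_2))        # point-to-point spacing, ~300 px
print(point_to_line(corner_1, line_a, line_b))   # point-to-line spacing
```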
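The abstract also describes cascaded channel-attention dense blocks built on SRDenseNet. The PyTorch sketch below is a plausible reading of that description rather than the authors' network: it combines an SRDenseNet-style dense block with squeeze-and-excitation channel attention in the style of RCAN, and the channel count, growth rate, and number of layers are hypothetical.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (as used in RCAN)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Reweight each feature channel by its learned importance.
        return x * self.fc(self.pool(x))

class CADenseBlock(nn.Module):
    """SRDenseNet-style dense block followed by channel attention."""
    def __init__(self, in_channels, growth=16, layers=4):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_channels
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1),
                nn.ReLU(inplace=True),
            ))
            ch += growth
        self.attention = ChannelAttention(ch)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            # Dense connectivity: each layer sees all previous feature maps.
            feats.append(conv(torch.cat(feats, dim=1)))
        return self.attention(torch.cat(feats, dim=1))

# Cascading such blocks grows the channel count, so a 1x1 bottleneck conv is
# typically inserted between blocks to keep the width manageable.
x = torch.randn(1, 64, 32, 32)        # a low-resolution sub-aperture patch
block = CADenseBlock(in_channels=64)
print(block(x).shape)                 # torch.Size([1, 128, 32, 32])
```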
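The reported metrics can likewise be sketched. The snippet below is an assumption-laden illustration: it computes a root-mean-square re-projection error from projected versus detected corner coordinates, and a coefficient of determination (R²) between reconstructed and reference values. The abstract does not state which quantities the paper's R² is defined over, so the arrays here are hypothetical.

```python
import numpy as np

def reprojection_error(projected, detected):
    """RMS re-projection error (pixels) between model-projected and detected corners."""
    diff = np.asarray(projected, dtype=float) - np.asarray(detected, dtype=float)
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

def r_squared(predicted, reference):
    """Coefficient of determination between predicted and reference values."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    ss_res = np.sum((reference - predicted) ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical values, purely to show usage.
projected = [[100.2, 200.1], [150.4, 200.3]]
detected  = [[100.0, 200.0], [150.0, 200.0]]
depth_est = [50.1, 60.2, 69.8, 80.3]   # reconstructed depths (mm)
depth_ref = [50.0, 60.0, 70.0, 80.0]   # reference depths (mm)
print(reprojection_error(projected, detected))
print(r_squared(depth_est, depth_ref))
```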