1. School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, Jilin, China
CHEN Chunyi, FAN Xiaohui, HU Xiaojuan, et al. Light-field angular super-resolution reconstruction via fusing 3D epipolar plane images[J]. Optics and Precision Engineering, 2023, 31(21): 3167-3177. DOI: 10.37188/OPE.20233121.3167.
Light-field (LF) imaging captures both the spatial and angular information of light rays in a scene, so LF images describe a scene more comprehensively than traditional 2D/3D images. To address the low angular resolution of LF images caused by hardware constraints, an LF angular super-resolution reconstruction method based on the fusion of 3D epipolar plane images (EPIs) is proposed. First, to fully exploit the parallax information of the input views and improve the accuracy of depth estimation, the input views are arranged along different parallax directions and features are extracted from each arrangement separately. The input views are then warped to the novel viewpoint positions using the estimated depth maps, yielding an initial synthetic LF. Finally, to preserve fine detail and geometric consistency in the reconstructed LF, a horizontal 3D EPI fusion branch and a vertical 3D EPI fusion branch reconstruct the initial synthetic LF along the horizontal and vertical directions, respectively, and the two results are blended to produce the final high-angular-resolution LF image. Experimental results show that, compared with existing methods, the proposed method improves reconstruction quality on both synthetic and real-world LF datasets, with a peak signal-to-noise ratio gain of up to 1.99%, effectively improving the quality of the reconstructed LF.
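As a rough illustration of the warping step described in the abstract (depth-based synthesis of an initial view at a new angular position), the following sketch shifts the pixels of a sub-aperture view by their disparity scaled by the angular offset. The array layout, the function name `warp_view`, and the use of nearest-neighbour backward warping are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def warp_view(src_view, disparity, du, dv):
    """Warp a sub-aperture view toward a novel angular position.

    src_view : (H, W) or (H, W, 3) sub-aperture image.
    disparity: (H, W) per-pixel disparity between angularly adjacent
               views (a proxy for the paper's estimated depth map).
    du, dv   : horizontal/vertical angular offset, in view units,
               from the source view to the target view.
    """
    h, w = src_view.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # Backward warping with nearest-neighbour sampling: each target pixel
    # looks up the source pixel displaced by disparity * angular offset.
    # (In practice the disparity at the target view would itself have to
    # be estimated or warped; that refinement is omitted here.)
    src_x = np.clip(np.rint(xs + du * disparity), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(ys + dv * disparity), 0, h - 1).astype(int)
    return src_view[src_y, src_x]

# Usage: synthesize a view half an angular step to the right of the source.
view = np.random.rand(64, 64).astype(np.float32)
disp = np.full((64, 64), 1.5, dtype=np.float32)
novel = warp_view(view, disp, du=0.5, dv=0.0)
```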
Keywords: light field; super-resolution reconstruction; 3D epipolar plane image; convolutional neural network
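The keyword "3D epipolar plane image" refers to stacking a full row or column of sub-aperture views into a volume whose 2D slices are classic EPIs, in which each scene point traces a line with slope proportional to its disparity. Below is a minimal sketch of slicing such volumes from a 4D light field; the (U, V, H, W) axis convention and the function name are assumptions made for illustration.

```python
import numpy as np

def epi_volumes(lf, u, v):
    """Slice horizontal and vertical 3D EPI volumes from a 4D light field.

    lf : (U, V, H, W) array, with U/V the vertical/horizontal angular
         axes and H/W the spatial axes of each sub-aperture view.
    """
    horizontal = lf[u]    # (V, H, W): the row of views at vertical index u
    vertical = lf[:, v]   # (U, H, W): the column of views at horizontal index v
    return horizontal, vertical

# Usage on a synthetic 7x7 light field of 64x64 grayscale views.
lf = np.random.rand(7, 7, 64, 64).astype(np.float32)
h_vol, v_vol = epi_volumes(lf, u=3, v=3)
# Fixing one image row of the horizontal volume gives a classic 2D EPI
# (angular axis vs. horizontal spatial axis).
epi = h_vol[:, 32, :]
```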