1. School of Software, Liaoning Technical University, Huludao 125100, Liaoning, China
2. Key Laboratory of Optoelectronic Information Control and Security Technology, Tianjin 300308, China
Received: 21 November 2023
Revised: 05 January 2024
Published: 25 May 2024
YUAN Heng,WANG Xiaoxue,YAN Tinghao,et al.Cross-level feature aggregation image enhancement with dual-path hybrid attention[J].Optics and Precision Engineering,2024,32(10):1538-1551. DOI: 10.37188/OPE.20243210.1538.
To address the low brightness, heavy noise, color deviation, and loss of detail and texture in low-light images, this study proposes an image enhancement method based on dual-path hybrid attention and cross-level feature aggregation. First, a Multi-scale Dual-path Attention Residual module (MDAR) is designed, comprising a Parallel Multi-scale Feature Sampling Block (PMFB) and a Dual-path Hybrid Attention Block (DHAB). By extracting and fusing multi-scale feature information, PMFB promotes the global representation of local features and effectively enhances image details. DHAB pays greater attention to noisy regions and color information, which alleviates the feature differences between the two attention paths, effectively suppresses noise, and improves image quality. In addition, a Cross-level Feature Aggregation Module (CFAM) is designed to fuse features at different levels, compensating for the gap between deep and shallow features, strengthening the perception of shallow features, and achieving image enhancement. Experimental results show that the proposed method reaches a PSNR, SSIM, LPIPS, and NIQE of 22.347 dB, 0.850, 0.178, and 4.153 on the LOL dataset, and 22.703 dB, 0.903, 0.137, and 3.822 on the MIT-Adobe 5K dataset. These results are substantial improvements over competing algorithms, demonstrating the effectiveness of the proposed method.
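The abstract does not disclose how DHAB is implemented. Purely as an illustrative sketch of the general dual-path (channel + spatial) attention idea that the module name suggests — the gating scheme, pooling choices, and function names below are assumptions, not the authors' design — the two paths can be written as a per-channel gate from global pooling and a per-pixel gate from a cross-channel descriptor:

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def dual_path_attention(feat):
    """Illustrative dual-path (channel + spatial) attention gate.

    feat: a C x H x W feature map as nested lists of floats.
    Channel path: one gate per channel from its global average activation.
    Spatial path: one gate per pixel from the mean across channels.
    The two gates jointly rescale the input feature map.
    """
    C, H, W = len(feat), len(feat[0]), len(feat[0][0])
    # Channel path: global average pooling -> sigmoid gate per channel.
    c_gate = [sigmoid(sum(v for row in ch for v in row) / (H * W)) for ch in feat]
    # Spatial path: cross-channel mean -> sigmoid gate per pixel.
    s_gate = [[sigmoid(sum(feat[c][i][j] for c in range(C)) / C)
               for j in range(W)] for i in range(H)]
    # Apply both gates multiplicatively to the input features.
    return [[[feat[c][i][j] * c_gate[c] * s_gate[i][j]
              for j in range(W)] for i in range(H)] for c in range(C)]
```

In practice such gates are learned (e.g. small convolutions or MLPs replace the fixed pooling here); this sketch only shows how two attention paths of different granularity can be combined on one feature map.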
Keywords: image enhancement; multi-scale; hybrid attention; feature aggregation
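For reference, the PSNR figures reported above (e.g. 22.347 dB on LOL) follow the standard definition PSNR = 10·log10(MAX² / MSE) between the enhanced image and the ground truth; a minimal helper (input format simplified to flat pixel lists) is:

```python
import math


def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images,
    given as flat lists of pixel intensities in [0, max_val]."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher is better; a pure black-vs-white pair scores 0 dB, and identical images score infinity.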
PISANO E D, ZONG S Q, HEMMINGER B M, et al. Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms[J]. Journal of Digital Imaging, 1998, 11(4): 193. doi: 10.1007/bf03178082
IBRAHIM H, KONG N S P. Brightness preserving dynamic histogram equalization for image contrast enhancement[J]. IEEE Transactions on Consumer Electronics, 2007, 53(4): 1752-1758. doi: 10.1109/tce.2007.4429280
ZUIDERVELD K. Contrast limited adaptive histogram equalization[M]//Graphics Gems. Amsterdam: Elsevier, 1994: 474-485. doi: 10.1016/b978-0-12-336156-1.50061-6
LAND E H, MCCANN J J. Lightness and retinex theory[J]. Journal of the Optical Society of America, 1971, 61(1): 1-11. doi: 10.1364/josa.61.000001
GUO X J, LI Y, LING H B. LIME: low-light image enhancement via illumination map estimation[J]. IEEE Transactions on Image Processing, 2017, 26(2): 982-993. doi: 10.1109/tip.2016.2639450
ZHANG Q, NIE Y W, ZHU L, et al. Enhancing underexposed photos using perceptually bidirectional similarity[J]. IEEE Transactions on Multimedia, 2021, 23: 189-202.
LI M D, LIU J Y, YANG W H, et al. Structure-revealing low-light image enhancement via robust retinex model[J]. IEEE Transactions on Image Processing, 2018, 27(6): 2828-2841. doi: 10.1109/tip.2018.2810539
YING Z Q, LI G, GAO W. A bio-inspired multi-exposure fusion framework for low-light image enhancement[EB/OL]. 2017: arXiv: 1711.00591.
LORE K G, AKINTAYO A, SARKAR S. LLNet: a deep autoencoder approach to natural low-light image enhancement[J]. Pattern Recognition, 2017, 61: 650-662. doi: 10.1016/j.patcog.2016.06.008
WEI C, WANG W J, YANG W H, et al. Deep retinex decomposition for low-light enhancement[EB/OL]. 2018: arXiv: 1808.04560. http://arxiv.org/abs/1808.04560
ZHANG Y H, ZHANG J W, GUO X J. Kindling the darkness: a practical low-light image enhancer[C]//Proceedings of the 27th ACM International Conference on Multimedia. Nice, France. ACM, 2019: 1632-1640. doi: 10.1145/3343031.3350926
WU W H, WENG J, ZHANG P P, et al. URetinex-Net: retinex-based deep unfolding network for low-light image enhancement[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, LA, USA. IEEE, 2022: 5891-5900. doi: 10.1109/cvpr52688.2022.00581
WANG D W, XING Z B, HAN P F, et al. Low illumination panoramic image enhancement algorithm based on simulated multi-exposure fusion[J]. Opt. Precision Eng., 2021, 29(2): 349-362. (in Chinese). doi: 10.37188/OPE.20212902.0349
LIM S, KIM W. DSLR: deep stacked Laplacian restorer for low-light image enhancement[J]. IEEE Transactions on Multimedia, 2021, 23: 4272-4284. doi: 10.1109/tmm.2020.3039361
WANG Z D, CUN X D, BAO J M, et al. Uformer: a general U-shaped transformer for image restoration[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, LA, USA. IEEE, 2022: 17662-17672. doi: 10.1109/cvpr52688.2022.01716
LI L, LIANG D, GAO Y, et al. ALL-E: aesthetics-guided low-light image enhancement[EB/OL]. 2023: arXiv: 2304.14610. doi: 10.24963/ijcai.2023/118
QU J X, LIU R W, GAO Y, et al. Double domain guided real-time low-light image enhancement for ultra-high-definition transportation surveillance[J]. IEEE Transactions on Intelligent Transportation Systems, 2024: 1-13. doi: 10.1109/tits.2024.3359755
LIU G H, YANG Q, MENG Y B, et al. A progressive fusion image enhancement method with parallel hybrid attention[J]. Opto-Electronic Engineering, 2023, 50(4): 220231. (in Chinese)
CHEN Q J, GU Y. Low-light image enhancement algorithm based on multi-channel fusion attention network[J]. Opt. Precision Eng., 2023, 31(14): 2111-2122. (in Chinese). doi: 10.37188/OPE.20233114.2111
JIANG Y F, GONG X Y, LIU D, et al. EnlightenGAN: deep light enhancement without paired supervision[J]. IEEE Transactions on Image Processing, 2021, 30: 2340-2349. doi: 10.1109/tip.2021.3051462
GUO C L, LI C Y, GUO J C, et al. Zero-reference deep curve estimation for low-light image enhancement[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, WA, USA. IEEE, 2020: 1777-1786. doi: 10.1109/cvpr42600.2020.00185
LI C Y, GUO C L, LOY C C. Learning to enhance low-light image via zero-reference deep curve estimation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(8): 4225-4238.
LIU R S, MA L, ZHANG J A, et al. Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, TN, USA. IEEE, 2021: 10556-10565. doi: 10.1109/cvpr46437.2021.01042
MA L, MA T Y, LIU R S, et al. Toward fast, flexible, and robust low-light image enhancement[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, LA, USA. IEEE, 2022: 5627-5636. doi: 10.1109/cvpr52688.2022.00555
DING X H, ZHANG X Y, HAN J G, et al. Scaling up your kernels to 31 × 31: revisiting large kernel design in CNNs[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, LA, USA. IEEE, 2022: 11953-11965. doi: 10.1109/cvpr52688.2022.01166
BYCHKOVSKY V, PARIS S, CHAN E, et al. Learning photographic global tonal adjustment with a database of input/output image pairs[C]//CVPR. Colorado Springs, CO, USA. IEEE, 2011: 97-104. doi: 10.1109/cvpr.2011.5995413
ZHANG R, ISOLA P, EFROS A A, et al. The unreasonable effectiveness of deep features as a perceptual metric[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA. IEEE, 2018: 586-595. doi: 10.1109/cvpr.2018.00068
MITTAL A, SOUNDARARAJAN R, BOVIK A C. Making a “completely blind” image quality analyzer[J]. IEEE Signal Processing Letters, 2013, 20(3): 209-212. doi: 10.1109/lsp.2012.2227726