Review of Research on Low-Light Image Enhancement Algorithms

LYU Zongwang, NIU Hejie, SUN Fuyan, ZHEN Tong

Citation: LYU Zongwang, NIU Hejie, SUN Fuyan, ZHEN Tong. Review of Research on Low-Light Image Enhancement Algorithms[J]. Infrared Technology, 2025, 47(2): 165-178.


Funding:

National Key Research and Development Program of China (2022YFD2100202)

    Corresponding author:

    LYU Zongwang (1979-), male, professor; research interests: image processing, grain information control and processing. E-mail: zongwang_lv@126.com

  • CLC number: TP391


  • Abstract:

    Low-light image enhancement is an important problem in the field of image processing. The rapid development of deep learning technology has provided new solutions for low-light image enhancement with broad application prospects. First, the current research status and challenges in the field of low-light image enhancement are comprehensively analyzed, and traditional methods together with their advantages and disadvantages are introduced. Second, deep learning-based low-light image enhancement algorithms are discussed in depth: they are classified into five categories according to their learning strategies, and the principles, network structures, and problems addressed by these algorithms are explained in detail; representative deep learning-based image enhancement algorithms from the last six years are then compared and analyzed in chronological order. Next, the current mainstream datasets and evaluation metrics are summarized, and the deep learning algorithms are tested and evaluated in terms of perceptual similarity and algorithm performance. Finally, directions for improvement and future research in the field of low-light image enhancement are summarized and discussed.

  • Rolling bearings are among the most widely used components in mechanical equipment, and they come in many types and structures. When a bearing fails, the failure often causes economic losses and can even threaten the safety of on-site operators, so condition monitoring and fault diagnosis of rolling bearings are of great importance [1]. After many years of iterative development, the main techniques for rolling-bearing condition monitoring and fault diagnosis include vibration signals, temperature signals, acoustic emission signals, electromagnetic signals, ultrasonic signals, and oil-sample analysis [2], among which vibration-signal methods are the most widely developed and applied. With the continuous advance of science and technology and the integration of emerging techniques, rolling-bearing condition monitoring and fault diagnosis are becoming increasingly automated and intelligent.

    Infrared thermal imaging forms an image from the radiation emitted by an object; the resulting thermal image captures not only the object's outline but also an intuitive representation of the temperature field over its surface [3]. When a rolling bearing fails, the spatial temperature distribution changes accordingly [4]; temperature-based bearing monitoring is therefore widely used, and likewise the bearing's condition can be identified from changes in its infrared thermal image [5]. In recent years, studies at home and abroad that use infrared thermal imaging for rolling-bearing condition identification have steadily increased. This approach views bearing condition identification from an entirely new perspective: the rich features of thermal images, combined with rapidly developing image-processing techniques, enable bearing diagnosis and monitoring that is long-range, non-destructive, high-precision, and intelligent, and it has gradually become a new research direction in bearing condition identification.

    This paper first introduces the basic principles of infrared thermal imaging, then summarizes domestic and international research on rolling-bearing condition monitoring and fault diagnosis based on infrared thermal imaging, and finally discusses the research trends in this area.

    Any object whose surface temperature is above absolute zero radiates electromagnetic waves, so every object in nature emits infrared radiation. The radiation intensity and its distribution over wavelength vary with temperature [6]. Electromagnetic waves can be divided by wavelength into visible light, which occupies 0.4-0.76 μm, and invisible radiation in the remaining wavelength ranges. The infrared radiation emitted by objects lies in the invisible 0.76-1000 μm band and can be further divided into the near infrared (0.76-3 μm), the mid infrared (3-6 μm), and the far infrared (6-1000 μm) [7].
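    The temperature dependence of the emitted radiation mentioned above is described by the standard blackbody laws, quoted here for reference (they are not given in the original text): Planck's law gives the spectral radiance, Wien's displacement law gives the wavelength of peak emission, and the Stefan-Boltzmann law (with emissivity ε for a grey body) gives the total radiated power.

```latex
B_{\lambda}(\lambda,T)=\frac{2hc^{2}}{\lambda^{5}}\,\frac{1}{e^{hc/(\lambda k_{B}T)}-1},\qquad
\lambda_{\max}\,T\approx 2898\ \mu\mathrm{m\cdot K},\qquad
M=\varepsilon\,\sigma T^{4}
```

    For an object near room temperature (about 300 K), Wien's law places the emission peak near 2898/300 ≈ 9.7 μm, which is one reason thermal cameras favour the 8-14 μm band discussed below.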

    The electromagnetic spectrum is shown in Fig. 1. Most infrared thermal cameras operate in the 3-5 μm and 8-14 μm bands [8]. The optical system filters and focuses the collected infrared radiation, and the infrared detector converts the optical signal into an electrical signal, from which the corresponding infrared image is generated. The detector measures the difference in infrared radiation between the target and its background and displays the surface temperature distribution of the object [9]. In addition, although infrared radiation penetrates poorly, far-infrared radiation is attenuated less during propagation than near- and mid-infrared radiation and is therefore better suited to all-weather, long-distance sensing [7]. Infrared thermal-imaging inspection mainly measures the infrared radiant energy emitted from an object's surface; common thermal cameras are either cooled or uncooled [10]. Cooled cameras respond quickly, detect over long distances, and measure with high accuracy, but they are relatively expensive and are used mainly in military applications. With advances in infrared focal-plane-array detectors, the speed and accuracy of uncooled cameras have improved considerably, and they are widely used in civilian applications [11].

    Figure 1.  Electromagnetic spectrum

    Temperature is regarded as the most important parameter when evaluating the performance and service life of high-speed bearings [12-13]. To develop bearings with long service life and stable performance, researchers have studied the temperature-field distribution of bearings under different operating conditions; infrared thermometry is widely used for this purpose because it is simple, fast, and capable of monitoring the temperature field over a large area [14]. During measurement, the object is imaged and the temperature of each pixel region is determined by comparing the grey values of the thermal image, which enables real-time multi-point measurement of the whole temperature field.
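    The grey-value-to-temperature mapping described above can be illustrated with a minimal sketch. It assumes a purely linear radiometric calibration between the camera's configured temperature span and the raw grey values; the function name, frame size, and span values are hypothetical, and real cameras additionally require emissivity and ambient compensation (see below).

```python
import numpy as np

def gray_to_temperature(gray_img, t_min, t_max, bit_depth=8):
    """Map raw grey values of a thermal frame to temperatures in degC,
    assuming a linear calibration over the camera's measurement span."""
    full_scale = 2 ** bit_depth - 1
    return t_min + (gray_img.astype(np.float32) / full_scale) * (t_max - t_min)

# Example: locate the hottest pixel of a (synthetic) bearing frame
frame = np.random.randint(0, 256, size=(240, 320), dtype=np.uint8)  # stand-in for a real capture
temp_map = gray_to_temperature(frame, t_min=20.0, t_max=120.0)
hot_y, hot_x = np.unravel_index(np.argmax(temp_map), temp_map.shape)
print(f"Peak temperature {temp_map[hot_y, hot_x]:.1f} degC at pixel ({hot_x}, {hot_y})")
```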

    Hou Xinyu noted that bearing temperature is currently measured with various types of temperature sensors that are limited to single-point measurement, whereas infrared thermal imaging enables fast, non-contact temperature measurement of bearings [15]. For aero-engine bearing tests, in which the bearing temperature field is the most important quantity to monitor, Li Yanchao et al. compared the temperatures measured by their infrared thermometry system with thermocouple measurements [12]; the results show that the measurement error of the infrared system meets the relevant technical requirements and that the system can be extended to other bearing temperature measurements. In the railway sector, the most widely used and mature application of infrared thermometry is the hot axle-box detection system, in which infrared probes scan part of the axle box as the train passes and the collected temperature signals are used to judge the working state of the axle-box bearings [16]. In a review of temperature-measurement techniques for high-speed bearings, Li Binbin et al. pointed out that although infrared radiometric thermometry permits non-contact monitoring of dynamic temperature behaviour, it is easily affected by infrared radiation from the background and the environment, so the measured temperature deviates from the true temperature and appropriate compensation and correction are required [17]. Fig. 2 shows the infrared thermal images of a B6204 deep-groove ball bearing running at 1000 r/min under three conditions: normal, insufficient lubrication, and raceway spalling [14].

    Figure 2.  Infrared thermal images of a rolling bearing
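    As noted above, radiometric temperature readings need compensation for emissivity and for radiation reflected from the surroundings. A simplified grey-body correction, obtained from the Stefan-Boltzmann law while ignoring atmospheric attenuation, is given below for reference; it is a textbook approximation rather than the specific correction used in the cited work.

```latex
\sigma T_{\mathrm{meas}}^{4}=\varepsilon\,\sigma T_{\mathrm{obj}}^{4}+(1-\varepsilon)\,\sigma T_{\mathrm{refl}}^{4}
\;\;\Longrightarrow\;\;
T_{\mathrm{obj}}=\left(\frac{T_{\mathrm{meas}}^{4}-(1-\varepsilon)\,T_{\mathrm{refl}}^{4}}{\varepsilon}\right)^{1/4}
```

    Here T_meas is the apparent (blackbody-equivalent) temperature reported by the camera, T_refl the reflected ambient temperature, and ε the emissivity of the bearing surface.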

    Rolling bearings belong to the class of rotating machinery [18], so infrared thermal-imaging fault diagnosis and condition monitoring of rolling bearings falls under intelligent infrared condition monitoring and fault diagnosis of rotating machinery. The technical workflow mainly comprises image acquisition, image preprocessing and enhancement, image feature extraction, and fault-feature classification [19-20]. After the raw thermal images of the bearing are captured by the infrared camera, preprocessing and enhancement are required to improve the signal-to-noise ratio and make the fault features more evident. The enhanced images contain the condition features of the bearing; extracting more representative feature parameters allows a classifier with better performance to be trained and a higher diagnostic accuracy to be achieved. By accurately classifying the different image features, the current working state of the bearing can be determined. Fig. 3 shows a complete workflow for bearing condition identification with infrared thermal imaging [21]: the infrared camera first captures thermal images of the bearing to build an image dataset, the images are then preprocessed and enhanced with the two-dimensional discrete wavelet transform (2D-DWT), features are subsequently extracted and reduced in dimension, and finally fault classification and result validation are performed. A minimal sketch of the 2D-DWT preprocessing step follows Fig. 3.

    Figure 3.  Workflow of rolling-bearing condition identification based on infrared thermal imaging
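    The sketch below illustrates the 2D-DWT preprocessing step of this workflow using PyWavelets. Attenuating the detail sub-bands is used here as a simple stand-in for the coefficient-selection rules applied in the cited studies; the array sizes and the attenuation factor are hypothetical.

```python
import numpy as np
import pywt  # PyWavelets

def dwt2_denoise(gray_img, wavelet="db2", detail_gain=0.5):
    """One-level 2D-DWT preprocessing: attenuate the detail sub-bands
    and reconstruct a smoother, higher-SNR image."""
    cA, (cH, cV, cD) = pywt.dwt2(gray_img.astype(np.float32), wavelet)
    cH, cV, cD = detail_gain * cH, detail_gain * cV, detail_gain * cD
    denoised = pywt.idwt2((cA, (cH, cV, cD)), wavelet)
    # idwt2 may pad odd-sized inputs by one row/column; crop back to the input size
    return denoised[: gray_img.shape[0], : gray_img.shape[1]]

thermal = np.random.rand(240, 320).astype(np.float32)  # stand-in for a captured frame
enhanced = dwt2_denoise(thermal)
```

    The approximation and detail coefficients returned by pywt.dwt2 can also be reused directly as inputs to the feature-extraction stage, as with the wavelet-coefficient features mentioned in the next subsection.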

    1) Image acquisition, preprocessing, and enhancement

    In practical engineering applications, the field of view of the acquired thermal image contains not only the bearing but also the motor, gears, bearing housing, and other objects; if several individual targets are to be assessed, the image must first be cropped or segmented before further processing [22]. Wang Yang et al. proposed a good solution to this problem: using a deep-learning object-detection algorithm, the trained model can recognize, classify, locate, and crop multiple categories of targets in the thermal image [18]; automatically and rapidly cropping the rolling bearing out of the thermal image greatly facilitates subsequent intelligent infrared condition monitoring and fault diagnosis. Sun Fucheng and Van Tung Tran et al. decomposed bearing thermal images into intrinsic mode functions (IMFs) with bi-dimensional empirical mode decomposition (BEMD), reduced the dimensionality with principal component analysis (PCA), and then fused the images [22-23] to achieve preprocessing enhancement. Yang Erbin et al. fused visible-light and infrared images of a train bogie by affine transformation and used the fused images for subsequent axle-box bearing fault diagnosis and model training [24]. Because directly detecting ellipses in thermal images with the Hough transform (HT) is slow, Deila-msalehy H. et al. first extracted edges with a Canny edge detector and then applied the HT, which accelerates the detection of the wheel and axle-box bearing regions in the image [25]. Anurag Choudhary et al. decomposed bearing thermal images with the 2D discrete wavelet transform and selected the required wavelet coefficients as input to the feature-extraction stage; 2D-DWT is likewise used for preprocessing and enhancement of rolling-bearing thermal images in [5, 20-21, 26-28].
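    The Canny-plus-Hough localization idea mentioned above can be sketched with OpenCV. The cited work detects ellipses; OpenCV's HOUGH_GRADIENT method detects circles and applies Canny internally (param1 is its upper threshold), so it is used here only as a simplified stand-in, and all parameter values and the file name are hypothetical.

```python
import cv2
import numpy as np

def locate_bearing_regions(gray_img):
    """Find circular bearing / axle-box candidates in a grayscale thermal frame."""
    blurred = cv2.GaussianBlur(gray_img, (5, 5), 1.5)  # suppress thermal noise first
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=60,
        param1=150,   # Canny high threshold applied inside HoughCircles
        param2=30,    # accumulator threshold: lower -> more candidate circles
        minRadius=15, maxRadius=120,
    )
    return [] if circles is None else np.round(circles[0]).astype(int)  # rows of (x, y, r)

# Usage: regions = locate_bearing_regions(cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE))
```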

    2) Image feature extraction

    Image features include histograms, colour, texture, and many other parameters [29]. In image processing, convolutional neural networks (CNNs) are widely praised by engineers mainly because of their strong automatic feature-extraction capability, and they achieve good results in infrared thermal-imaging fault diagnosis of rolling bearings [5, 30]. Histograms characterize image information well and are therefore widely used; Deila-msalehy H. et al. extracted histogram of oriented gradients (HOG) features when judging the state of wheels and axle-box bearings in thermal images [25]. To address the weak information content and strong noise of infrared images of rotating machinery, Lixiang Duan et al. proposed a new image-segmentation method based on a dispersion-based region-selection criterion to strengthen feature extraction in infrared image analysis [31]. Compared with the traditional vibration-signal approach, infrared thermography (IRT) initially could not match the diagnostic accuracy of vibration signals for rotating machinery. Zhen Jia et al. attributed this to the fact that intelligent infrared diagnosis depends heavily on the choice of thermal-image features: in traditional pattern recognition the features to extract are specified manually, and a poor choice often prevents high diagnostic accuracy [32]. They therefore extracted features with two popular methods, bag of visual words (BoVW) and CNN, and classified them with a support vector machine (SVM); in their results the IRT accuracy exceeded that of the traditional vibration-signal method. Yongbo Li et al. subsequently extracted fault features from IRT images with BoVW and classified them with an SVM, also with good results [33]. Tauheed Mian et al. compared the diagnostic accuracy of vibration signals and non-intrusive infrared thermography under double and multiple bearing faults: the vibration signals achieved adequate accuracy of 99.39%-99.97% in both cases, while infrared thermography with automatic CNN feature extraction and classification reached 100% [5]. Feature extraction often yields high-dimensional feature vectors; feeding them directly into a classifier tends to cause overfitting, and the individual dimensions are often highly correlated. Reducing the dimensionality while losing as little information as possible preserves classification accuracy and speeds up classification. The most common method is PCA, also known as the Karhunen-Loève (K-L) transform; genetic algorithms and linear discriminant analysis (LDA) can also be used for dimensionality reduction [21, 27-28, 32].
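    A minimal sketch of the HOG-plus-PCA route described above is given below; the frame sizes, HOG cell sizes, and number of retained components are hypothetical, and in practice the frames would be real bearing thermal images rather than random arrays.

```python
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA

def hog_features(frames):
    """Extract HOG descriptors from a batch of grayscale thermal frames."""
    return np.array([
        hog(f, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
        for f in frames
    ])

frames = np.random.rand(40, 120, 160)     # placeholder for 40 cropped bearing frames
X = hog_features(frames)                  # high-dimensional, correlated descriptors
pca = PCA(n_components=20)                # K-L-transform-style reduction
X_reduced = pca.fit_transform(X)          # compact, decorrelated feature vectors
print(X.shape, "->", X_reduced.shape)
```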

    3) Fault-feature classification

    The classification of fault features from rolling-bearing thermal images does not differ greatly from feature classification for ordinary image recognition. Machine learning is the method of choice for classification, and from early shallow learning to today's rapidly developing deep learning, it plays an increasingly important role in infrared thermal-imaging fault diagnosis and condition monitoring of rolling bearings. Among supervised methods, the support vector machine, a binary generalized linear classifier, is widely used for fault-feature classification of bearing thermal images [20-21, 25-26, 28, 32-33]. Naive Bayes classifiers, k-nearest neighbour (KNN) classifiers, linear discriminant analysis, and complex decision trees (CDT) have also been applied [20, 32]. Among unsupervised methods, k-means, the classic clustering algorithm, also achieves good results for thermal-image bearing feature classification [22]. Anurag Choudhary et al. obtained an optimal feature set with the Mahalanobis distance (MD) criterion and then passed the features to a complex decision tree, linear discriminant analysis, and a support vector machine for fault classification and performance evaluation [21]; the results show that the SVM outperforms the CDT and LDA. Ankush Mehta et al. decomposed and fused bearing thermal images with the 2D discrete wavelet transform, reduced the dimensionality with PCA, and then used SVM, LDA, and KNN classifiers for fault classification and performance evaluation [28]; again the SVM outperformed LDA and KNN. CNNs are feed-forward neural networks with deep structure, one of the representative deep-learning algorithms; they are built in analogy to the biological visual mechanism and can be trained in supervised or unsupervised fashion. Kunal Sharma et al. extracted features from thermal images of bearings with inner-race, outer-race, and rolling-element faults and compared an SVM with a CNN based on the AlexNet architecture; the CNN achieved the better classification performance [5, 34].
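    The classifier comparison described above can be sketched as follows with scikit-learn. The feature matrix and condition labels are placeholders; in practice they would be the dimensionality-reduced thermal-image features and the known bearing states, and the hyperparameter values are hypothetical.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(60, 20)        # placeholder for reduced thermal-image features
y = np.arange(60) % 3             # placeholder labels: normal / inner-race / outer-race fault

classifiers = {
    "SVM": SVC(kernel="rbf", C=10.0),
    "LDA": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)   # scale features before classifying
    scores = cross_val_score(pipe, X, y, cv=5)    # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```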

    A survey of the literature on infrared thermal-imaging fault diagnosis and condition monitoring of rolling bearings, comparing the methods used for image acquisition and preprocessing enhancement, feature extraction, and classification, leads to the following observations. Compared with traditional preprocessing such as cropping, segmentation, filtering, and denoising, deep-learning object detection places fewer demands on the image: through convolution it can quickly and accurately locate the rolling bearing and other objects in the field of view, and can even judge the bearing's state directly from the thermal image, so it has clear advantages and strong development prospects; its drawback is that a detection model with good classification performance requires a large number of image samples of different types for training. Fusing visible-light and infrared images can largely compensate for the inherently blurred contours of rolling-bearing thermal images and yields high-quality thermal images, but an effective fusion method for the two modalities remains a key problem to be explored. Image-processing methods such as the Hough transform, bi-dimensional empirical mode decomposition, and the 2D discrete wavelet transform involve relatively complicated diagnostic steps, occupy considerable computing resources, and are inefficient and poorly adaptable for large images.

    During feature extraction, high-dimensional feature parameters with large data volume and strong correlation tend to slow computation and cause the classification model to overfit, so appropriate dimensionality reduction and selection of representative features are required; whether effective features are selected is, to a large extent, the key factor limiting the final diagnostic accuracy. Convolutional neural networks have strong feature-extraction capability: through repeated down- and up-sampling and the incorporation of spatial feature pyramids, image features can be extracted comprehensively and effectively, enabling high classification accuracy. For large images, however, the HOG feature vector grows steeply and occupies considerable computation and storage, and HOG is sensitive to contours and edges but adapts poorly to small local changes in grey-level brightness. In fault-feature classification, the accuracy of the support vector machine, linear discriminant analysis, k-nearest neighbours, and similar methods is limited by whether effective features were extracted in the previous stage, whereas convolutional neural networks show great advantages in both feature extraction and classification: with feature pooling connected to fully connected layers and continual improvement of the network structure, good classification results can be achieved, giving them broad application prospects. For k-means, the typical representative of unsupervised clustering, different choices of the number of clusters K lead to different diagnostic accuracy, and seeking the optimal K yields relatively good classification results. The bag-of-visual-words method extracts image features, builds training- and test-set vocabularies through clustering, and trains and validates the classification model on the extracted feature vectors; it achieves high diagnostic accuracy and, compared with supervised learning, avoids the time-consuming labelling of large numbers of images.

    Compared with traditional vibration- and temperature-signal techniques, fault diagnosis and condition monitoring of rolling bearings with infrared thermal imaging combines infrared thermography with image processing and views bearing condition identification from a relatively novel technical perspective, offering long-range, non-contact, efficient, convenient, and broadly adaptable advantages. Initially, the diagnostic accuracy of infrared thermal imaging was lower than that of the widely used vibration-signal approach; with the application of CNNs, 2D-DWT, BEMD, bag of visual words, and other image-processing techniques to preprocessing enhancement, feature extraction, and classification, its diagnostic accuracy has surpassed that of vibration signals. At the same time, as the resolution and response speed of infrared cameras continue to improve, higher-quality thermal images will facilitate subsequent condition monitoring and fault classification and enable even higher diagnostic accuracy. The high noise level of infrared thermal images, however, remains a difficulty for thermal-image diagnosis of rolling bearings, and effective filtering and denoising are needed to make the image features more distinct. The error between the infrared-measured bearing temperature and the actual temperature also cannot be ignored and must be compensated and corrected. With the rapid development of artificial intelligence and image processing, machine learning and deep learning are playing an important role in infrared thermal-imaging fault diagnosis and condition monitoring of rolling bearings and will enable more accurate and more intelligent bearing condition identification.

  • Figure 1.   Summary of representative traditional low-light image enhancement methods

    Figure 2.   Representative deep learning-based low-light image enhancement methods

    Figure 3.   Branch structure of the TBEFN model

    Figure 4.   Structure of the EnlightenGAN model

    Figure 5.   Structure of the RA3C model

    Figure 6.   Backbone structure of the RRDNet model

    Figure 7.   Structure of the DRBN model

    Table 1.   Comparison of traditional low-light image enhancement algorithms

    | Algorithm | Key technology | Advantages | Disadvantages |
    | --- | --- | --- | --- |
    | Histogram equalization[2-6] | Changes the histogram distribution of image grey-scale intervals | Improves the brightness distribution and sharpness of images | Grey levels merge and detail information is lost |
    | Tone-mapping algorithms[7-9] | Convert high-dynamic-range images to low-dynamic-range images | Improve image brightness and colour performance | Limitations in image display quality |
    | Fusion-based algorithms[10-12] | Fuse multiple image-processing results | Advantages in improving brightness and clarity | Generate fusion artefacts and introduce image distortion |
    | Defogging-based algorithms[13-16] | Improve low-light image quality by removing the haze effect | Restore image detail and improve visual quality | Partial loss of information; introduce artefacts and noise |
    | Retinex theory[17-20] (see the formulation after this table) | Decomposes the image into reflectance and illumination components | Improves colour balance and detail in images | Complex computation; strongly dependent on illumination |
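    For the Retinex entry in Table 1, the classical formulation assumes that an observed image is the pixel-wise product of a reflectance component and an illumination component, and enhancement amounts to estimating and adjusting the illumination; this standard decomposition is quoted here for reference and is often handled in the logarithmic domain.

```latex
I(x,y)=R(x,y)\cdot L(x,y),\qquad \log I=\log R+\log L
```

    Here I is the observed low-light image, R the reflectance (scene content), and L the illumination map.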

    Table 2.   Summary of deep learning-based low-light image enhancement methods

    | Year | Method | Improvement | Advantages | Disadvantages |
    | --- | --- | --- | --- | --- |
    | 2019 | EnlightenGAN[26] | Attention-guided U-Net; an image-enhancement GAN | Processes and generates highly realistic images of many types | Introduces artefacts and amplifies noise; not stable enough |
    | 2019 | ExcNet[31] | Excludes backlighting and automatically adjusts brightness and contrast | Automatically restores detail in backlit images; no need for extensive annotated data | Subject to scene limitations; complex scenes may be affected; long training time |
    | 2020 | Zero-DCE[32] | Introduces a contrast loss function and the feature network DCE-Net | No reference image needed; enhances the image while retaining detail | Requires high input image quality; high training cost |
    | 2020 | RRDNet[34] | Retinex decomposition of residual and salient images | Zero-shot learning; image detail and colour saturation are well preserved | Relies on accurate reflectance/illumination decomposition; requires high image quality |
    | 2020 | DRBN[37] | Recursive banding ensures detail recovery; band reconstruction enhances the image | Good reconstruction of detail using paired and unpaired training data | Simple experimental setup; not evaluated on wider datasets |
    | 2021 | Zero-DCE++[33] | Accelerated, lightweight version of Zero-DCE | Fast inference while maintaining performance | Enhanced images can appear over-enhanced or distorted |
    | 2021 | TBEFN[25] | Two-branch structure; adaptive residual block and attention module | Handles exposure fusion and image enhancement simultaneously | Requires a large amount of labelled data; less effective in extreme lighting conditions |
    | 2021 | RUAS[38] | Lightweight image enhancement built on Retinex theory | Fast, requires few computational resources; very effective | Performance and computational efficiency need further improvement |
    | 2022 | LEDNet[39] | Combines low-light enhancement and deblurring in the dark | Handles both low light and blur; new LOL-Blur dataset | Dataset does not cover other low-light environments and shooting scenes |
    | 2022 | LEES-Net[27] | Attention mechanisms for positioning and dynamic adjustment | Good generalization ability, robustness, and visual quality | Long training time; lacks real-time capability; artefacts and over-smoothing |
    | 2022 | D2HNet[40] | Joint denoising and deblurring with a hierarchical network | High-quality restoration for short- and long-exposure shots | Training-data quality needs improvement |
    | 2023 | Literature [41] | New framework for appearance and structure modelling | Structural features guide appearance enhancement, producing lifelike results | Simple model structure; no real-time enhancement |
    | 2023 | Noise2Code[42] | Projection-based image-denoising network | Joint training of the denoising network and a VQGAN model | Some application limitations; adaptability to complex degradation models |
    | 2023 | NeRCo[28] | Adaptive fitting functions that unify scene degradation factors | Semantics-oriented supervision improves perceptual friendliness and robustness | Prior limitations with text images; validity not verified on a wider range of scenes |
    | 2023 | DecNet[43] | Decomposition and adjustment networks with a self-supervised fine-tuning strategy | Efficient, lightweight network; good performance without manual tuning | Degradation problems on images taken in real low-light conditions |
    | 2024 | Multi-Channel Retinex[44] | Multi-channel Retinex enhancement network with a designed initialization module | Split-channel enhancement addresses colour deviation | Higher model complexity; requires paired training data |
    | 2024 | Retinexmamba[45] | Integrates the Retinexformer learning framework and introduces an illumination-fusion state-space model | State-space model improves processing speed while maintaining image quality during enhancement | Lacks some no-reference image-quality metrics |
    | 2024 | LYT-NET[46] | YUV separation of luminance and chrominance with a hybrid loss function | Simplified light/colour separation reduces model complexity | Generalization ability to be verified; limited comparison methods |

    Table 3.   Summary of mainstream low-light image enhancement datasets

    | Dataset | No. of images | Image format | Paired/unpaired | Real/synthetic |
    | --- | --- | --- | --- | --- |
    | SICE[47] | 4413 | RGB | Paired | Real |
    | LOL[48] | 500 | RGB | Paired | Real |
    | VE-LOL[49] | 2500 | RGB | Paired | Real + synthetic |
    | MIT-Adobe FiveK[50] | 5000 | RAW | Paired | Real |
    | SID[51] | 5094 | RAW | Paired | Real |
    | SMOID[52] | 179 | RAW | Paired | Real |
    | LIME[53] | 10 | RGB | Unpaired | Real |
    | ExDARK[54] | 7363 | RGB | Unpaired | Real |

    Table 4.   Performance comparison of deep learning-based low-light image enhancement methods

    | Year | Method | Network framework | Dataset | Image format | Evaluation metrics | Implementation framework | PSNR/dB |
    | --- | --- | --- | --- | --- | --- | --- | --- |
    | 2019 | EnlightenGAN[26] | U-Net | NPE/MEF/LIME/DICM etc. | RGB | NIQE | PyTorch | 17.48 |
    | 2019 | ExcNet[31] | CNN | IEpsD | RGB | CDIQA/LOD | PyTorch | 17.25 |
    | 2020 | Zero-DCE[32] | U-Net-type network | SICE | RGB | PSNR/SSIM/MAE | PyTorch | 14.86 |
    | 2020 | RRDNet[34] | Retinex decomposition | NPE/MEF/LIME/DICM etc. | RGB | NIQE/CPCQI | PyTorch | 14.38 |
    | 2020 | DRBN[37] | Recursive network | LOL | RGB | PSNR/SSIM | PyTorch | 20.13 |
    | 2021 | Zero-DCE++[33] | U-Net-type network | SICE | RGB | PSNR/SSIM/MAE etc. | PyTorch | 16.42 |
    | 2021 | TBEFN[25] | U-Net-type network | LOL | RGB | PSNR/SSIM/NIQE | TensorFlow | 17.14 |
    | 2021 | RUAS[38] | Retinex decomposition | LOL/MIT-Adobe FiveK | RGB | PSNR/SSIM/LPIPS | PyTorch | 20.6 |
    | 2022 | LEDNet[39] | Neural network | LOL-Blur | RGB | PSNR/SSIM/LPIPS | PyTorch | 23.86 |
    | 2022 | LEES-Net[27] | CNN | LOL-v2/LSRW/DARK FACE | RGB | PSNR/SSIM/LPIPS/LOE | PyTorch | 20.2 |
    | 2022 | D2HNet[40] | Pyramid network | D2 (synthetic dataset) | RGB | PSNR/SSIM/PR | PyTorch | 26.67 |
    | 2023 | Literature [41] | End-to-end network | LOL/SID | RGB | PSNR/SSIM | PyTorch | 24.62 |
    | 2023 | Noise2Code[42] | GAN model | SIDD/DND | RGB | PSNR/SSIM | PyTorch | |
    | 2023 | NeRCo[28] | Implicit network | LOL/LIME/LSRW | RGB | PSNR/SSIM/NIQE/LOE | PyTorch | 19.84 |
    | 2023 | DecNet[43] | Retinex decomposition | LOL/NPE/MIT-Adobe FiveK | RGB | PSNR/SSIM/NIQE/LOE | PyTorch | 22.82 |
    | 2024 | Multi-Channel Retinex[44] | Retinex decomposition | LOL/MIT-Adobe FiveK | RGB | PSNR/SSIM/FSIM | PyTorch | 21.94 |
    | 2024 | Retinexmamba[45] | Retinex + Mamba | LOL-v1/LOL-v2-real | RGB | PSNR/SSIM/RMSE | PyTorch | 22.453 |
    | 2024 | LYT-NET[46] | YUV Transformer | LOL-v1/LOL-v2-real/LOL-v2-syn | YUV | PSNR/SSIM | TensorFlow | 22.38 |
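    For reference, the full-reference metrics reported in Table 4 have the following standard definitions (not taken from the original text); for 8-bit images MAX_I = 255, and C1, C2 are small stabilizing constants.

```latex
\mathrm{MSE}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl[I(i,j)-\hat{I}(i,j)\bigr]^{2},\qquad
\mathrm{PSNR}=10\log_{10}\frac{\mathrm{MAX}_I^{2}}{\mathrm{MSE}}
```

```latex
\mathrm{SSIM}(x,y)=\frac{(2\mu_x\mu_y+C_1)(2\sigma_{xy}+C_2)}{(\mu_x^{2}+\mu_y^{2}+C_1)(\sigma_x^{2}+\sigma_y^{2}+C_2)}
```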
  • [1]

    LIU J, XU D, YANG W, et al. Benchmarking low-light image enhancement and beyond[J]. International Journal of Computer Vision, 2021, 129: 1153-1184. DOI: 10.1007/s11263-020-01418-8

    [2] 郭永坤, 朱彦陈, 刘莉萍, 等. 空频域图像增强方法研究综述[J]. 计算机工程与应用, 2022, 58(11): 23-32.

    GUO Y K, ZHU Y C, LIU L P, et al. A review of research on image enhancement methods in the air-frequency domain[J]. Computer Engineering and Application, 2022, 58(11): 23-32.

    [3]

    Jebadass J R, Balasubramaniam P. Low contrast enhancement technique for color images using intervalvalued intuitionistic fuzzy sets with contrast limited adaptive histogram equalization[J]. Soft Computing, 2022, 26(10): 4949-4960. DOI: 10.1007/s00500-021-06539-x

    [4] 杨嘉能, 李华, 田宸玮, 等. 基于自适应校正的动态直方图均衡算法[J]. 计算机工程与设计, 2021, 42(5): 1264-1270.

    YANG J N, LI H, TIAN C W, et al. Dynamic histogram equalization algorithm based on adaptive correction[J]. Computer Engineering and Design, 2021, 42(5): 1264-1270.

    [5]

    KUO C F J, WU H C. Gaussian probability bi‐histogram equalization for enhancement of the pathological features in medical images[J]. International Journal of Imaging Systems and Technology, 2019, 29(2): 132-145. DOI: 10.1002/ima.22307

    [6]

    LI C, LIU J, ZHU J, et al. Mine image enhancement using adaptive bilateral gamma adjustment and double plateaus histogram equalization[J]. Multimedia Tools and Applications, 2022, 81(9): 12643-12660. DOI: 10.1007/s11042-022-12407-z

    [7]

    Nguyen N H, Vo T V, Lee C. Human visual system model-based optimized tone mapping of high dynamic range images[J]. IEEE Access, 2021, 9: 127343-127355. DOI: 10.1109/ACCESS.2021.3112046

    [8] 陈迎春. 基于色调映射的快速低照度图像增强[J]. 计算机工程与应用, 2020, 56(9): 234-239.

    CHEN Y C. Fast low-light image enhancement based on tone mapping[J]. Computer Engineering and Applications, 2020, 56(9): 234-239.

    [9] 赵海法, 朱荣, 杜长青, 等. 全局色调映射和局部对比度处理相结合的图像增强算法[J]. 武汉大学学报, 2020, 66(6): 597-604.

    ZHAO H F, ZHU R, DU C Q, et al. An image enhancement algorithm combining global tone mapping and local contrast processing[J]. Journal of Wuhan University, 2020, 66(6): 597-604.

    [10] 李明悦, 晏涛, 井花花, 等. 多尺度特征融合的低照度光场图像增强算法[J]. 计算机科学与探索, 2022, 17(8): 1904-1916.

    LI M Y, YAN T, JING H H, et al. Multi-scale feature fusion algorithm for low illumination light field image enhancement[J]. Computer Science and Exploration, 2022, 17(8): 1904-1916.

    [11] 张微微. 基于图像融合的低照度水下图像增强[D]. 大连: 大连海洋大学, 2023.

    ZHANG W W. Low Illumination Underwater Image Enhancement Based on Image Fusion[D]. Dalian: Dalian Ocean University, 2023.

    [12] 田子建, 吴佳奇, 张文琪, 等. 基于Transformer和自适应特征融合的矿井低照度图像亮度提升和细节增强方法[J]. 煤炭科学技术, 2024, 52(1): 297-310.

    TIAN Z J, WU J Q, ZHANG W Q, et al. Brightness enhancement and detail enhancement method for low illumination images of mines based on Transformer and adaptive feature fusion[J]. Coal Science and Technology, 2024, 52(1): 297-310.

    [13]

    DONG X, PANG Y, WEN J, et al. Fast efficient algorithm for enhancement of low lighting video[C]//2011 IEEE International Conference on Multimedia and Expo, 2011: 1-6.

    [14]

    HUO Y S. Polarization-based research on a priori defogging of dark channel[J]. Acta Physica Sinica, 2022, 71(14): 144202. DOI: 10.7498/aps.71.20220332

    [15]

    HONG S, KIM M, LEE H, et al. Nighttime single image dehazing based on the structural patch decomposition[J]. IEEE Access, 2021, 9: 82070-82082. DOI: 10.1109/ACCESS.2021.3086191

    [16]

    SI Y, YANG F, CHONG N. A novel method for single nighttime image haze removal based on gray space[J]. Multimedia Tools and Applications, 2022, 81(30): 43467-43484. DOI: 10.1007/s11042-022-13237-9

    [17]

    LAND E H. The retinex theory of color vision[J]. Scientific American, 1977, 237(6): 108-129. DOI: 10.1038/scientificamerican1277-108

    [18]

    WANG R, ZHANG Q, FU C W, et al. Underexposed photo enhancement using deep illumination estimation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 6849-6857.

    [19]

    CAI Y, BIAN H, LIN J, et al. Retinexformer: one-stage retinex-based transformer for low-light image enhancement[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023: 12504-12513.

    [20]

    REN X, YANG W, CHENG W H, et al. LR3M: robust low-light enhancement via low-rank regularized retinex model[J]. IEEE Transactions on Image Processing, 2020, 29: 5862-5876. DOI: 10.1109/TIP.2020.2984098

    [21]

    Lore K G, Akintayo A, Sarkar S. LLNet: a deep autoencoder approach to natural low-light image enhancement[J]. Pattern Recognition, 2017, 61: 650-662. DOI: 10.1016/j.patcog.2016.06.008

    [22]

    ZHANG Y, ZHANG J, GUO X. Kindling the darkness: a practical low-light image enhancer[C]//Proceedings of the 27th ACM International Conference on Multimedia, 2019: 1632-1640.

    [23]

    ZHANG Y, GUO X, MA J, et al. Beyond brightening low-light images[J]. International Journal of Computer Vision, 2021: 1013-1037.

    [24]

    LI C, GUO J, PORIKLI F, et al. LightenNet: a convolutional neural network for weakly illuminated image enhancement[J]. Pattern Recognition Letters, 2018, 104: 15-22. DOI: 10.1016/j.patrec.2018.01.010

    [25]

    LU K, ZHANG L. TBEFN: a two-branch exposure-fusion network for low-light image enhancement[J]. IEEE Transactions on Multimedia, 2021, 23: 4093-4105. DOI: 10.1109/TMM.2020.3037526

    [26]

    JIANG Y, GONG X, LIU D, et al. EnlightenGAN: deep light enhancement without paired supervision[J]. IEEE Transactions on Image Processing, 2021, 30: 2340-2349. DOI: 10.1109/TIP.2021.3051462

    [27]

    LI X, HE R, WU J, et al. LEES-Net: fast, lightweight unsupervised curve estimation network for low-light image enhancement and exposure suppression[J]. Displays, 2023, 80: 102550. DOI: 10.1016/j.displa.2023.102550

    [28]

    YANG S, DING M, WU Y, et al. Implicit neural representation for cooperative low-light image enhancement[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023: 12918-12927.

    [29]

    YU R, LIU W, ZHANG Y, et al. Deepexposure: learning to expose photos with asynchronously reinforced adversarial learning[J]. Advances in Neural Information Processing Systems, 2018, 31: 7429-7439.

    [30] 周腾威. 基于深度学习的图像增强算法研究[D]. 南京: 南京信息工程大学, 2021.

    ZHOU T W. Research on Image Enhancement Algorithm Based on Deep Learning[D]. Nanjing: Nanjing University of Information Engineering, 2021.

    [31]

    ZHANG L, ZHANG L, LIU X, et al. Zero-shot restoration of back-lit images using deep internal learning[C]//Proceedings of the 27th ACM International Conference on Multimedia, 2019: 1623-1631.

    [32]

    GUO C, LI C, GUO J, et al. Zero-reference deep curve estimation for low-light image enhancement[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 1780-1789.

    [33]

    LI C, GUO C, CHEN C L. Learning to enhance low-light image via zero-reference deep curve estimation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 44(8): 4225-4238.

    [34]

    ZHU A, ZHANG L, SHEN Y, et al. Zero-shot restoration of underexposed images via robust retinex decomposition[C]//2020 IEEE International Conference on Multimedia and Expo (ICME), 2020, DOI: 10.1109/ICME46284.2020.9102962.

    [35]

    SOHN K, BERTHELOT D, CARLINI N, et al. Fixmatch: simplifying semi-supervised learning with consistency and confidence[J]. Advances in Neural Information Processing Systems, 2020, 33: 596-608.

    [36]

    LIU Y, TIAN Y, CHEN Y, et al. Perturbed and strict mean teachers for semi-supervised semantic segmentation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 4258-4267.

    [37]

    YANG W, WANG S, FANG Y, et al. From fidelity to perceptual quality: a semi-supervised approach for low-light image enhancement [C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 3063-3072.

    [38]

    LIU R, MA L, ZHANG J, et al. Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement [C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 10561-10570.

    [39]

    ZHOU S, LI C, CHANGE LOY C. LEDNet: joint low-light enhancement and deblurring in the dark[C]//European Conference on Computer Vision, 2022: 573-589.

    [40]

    ZHAO Y, XU Y, YAN Q, et al. D2hnet: Joint denoising and deblurring with hierarchical network for robust night image restoration [C]//European Conference on Computer Vision, 2022: 91-110.

    [41]

    XU X, WANG R, LU J. Low-Light Image Enhancement via Structure Modeling and Guidance[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 9893-9903.

    [42]

    Cheikh Sidiya A. Generative prior for unsupervised image restoration[D]. West Virginia University, 2023.

    [43]

    LIU X, XIE Q, ZHAO Q, et al. Low-light image enhancement by retinex-based algorithm unrolling and adjustment[J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 35(11): 2162-2388.

    [44] 张箴, 鹿阳, 苏奕铭, 等. 基于多通道Retinex模型的低照度图像增强网络[J]. 信息与控制, 2024, 53(5): 652-661.

    ZHANG Z, LU Y, SU Y M, et al. Low-light image enhancement network based on multi-channel Retinex model[J]. Information and Control, 2024, 53(5): 652-661.

    [45]

    BAI J, YIN Y, HE Q. Retinexmamba: retinex-based mamba for low-light image enhancement[J]. arXiv preprint arXiv: 2405.03349, 2024.

    [46]

    Brateanu A, Balmez R, Avram A, et al. Lyt-net: lightweight yuv transformer-based network for low-light image enhancement[J]. arXiv preprint arXiv: 2401.15204, 2024.

    [47]

    CAI J, GU S, ZHANG L. Learning a deep single image contrast enhancer from multi-exposure images[J]. IEEE Transactions on Image Processing, 2018, 27(4): 2049-2062.

    [48]

    WEI C, WANG W, YANG W, et al. Deep retinex decomposition for low-light enhancement[J]. arXiv preprint arXiv: 1808.04560, 2018.

    [49]

    LIU J, XU D, YANG W, et al. Benchmarking low-light image enhancement and beyond[J]. International Journal of Computer Vision, 2021, 129: 1153-1184.

    [50]

    Bychkovsky V, Paris S, CHAN E, et al. Learning photographic global tonal adjustment with a database of input/output image pairs[C]//CVPR of IEEE, 2011: 97-104.

    [51]

    CHEN C, CHEN Q, XU J, et al. Learning to see in the dark[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 3291-3300.

    [52]

    JIANG H, ZHENG Y. Learning to see moving objects in the dark[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019: 7324-7333.

    [53]

    GUO X, LI Y, LING H. LIME: Low-light image enhancement via Illumination Map Estimation[J]. IEEE Transactions on Image Processing, 2017, 26(2): 982-993.

    [54]

    LOH Y P, CHAN C S. Getting to know low-light images with the exclusively dark dataset[J]. Computer Vision and Image Understanding, 2019, 178: 30-42.

    [55]

    SARA U, AKTER M, UDDIN M S. Image quality assessment through FSIM, SSIM, MSE and PSNR——a comparative study[J]. Journal of Computer and Communications, 2019, 7(3): 8-18.

    [56]

    Mittal A, Soundararajan R, Bovik A C. Making a "Completely Blind" image quality analyzer[J]. IEEE Signal Processing Letters, 2013, 20(3): 209-212.

    [57]

    ZHANG R, Isola P, Efros A A, et al. The unreasonable effectiveness of deep features as a perceptual metric[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 586-595.

    [58]

    HU S, YAN J, DENG D. Contextual information aided generative adversarial network for low-light image enhancement[J]. Electronics, 2021, 11(1): 32.

    [59]

    YANG S, ZHOU D, CAO J, et al. Rethinking low-light enhancement via transformer-GAN[J]. IEEE Signal Processing Letters, 2022, 29: 1082-1086.

    [60]

    PAN Z, YUAN F, LEI J, et al. MIEGAN: mobile image enhancement via a multi-module cascade neural network[J]. IEEE Transactions on Multimedia, 2022, 24: 519-533.

    [61]

    CHEN X, LI J, HUA Z. Retinex low-light image enhancement network based on attention mechanism[J]. Multimedia Tools and Applications, 2023, 82(3): 4235-4255.

    [62]

    ZHANG Q, ZOU C, SHAO M, et al. A single-stage unsupervised denoising low-illumination enhancement network based on swin-transformer[J]. IEEE Access, 2023, 11: 75696-75706.

    [63]

    YE J, FU C, CAO Z, et al. Tracker meets night: a transformer enhancer for UAV tracking[J]. IEEE Robotics and Automation Letters, 2022, 7(2): 3866-3873.

    [64]

    Kanev A, Nazarov M, Uskov D, et al. Research of different neural network architectures for audio and video denoising[C]//2023 5th International Youth Conference on Radio Electronics, Electrical and Power Engineering (REEPE) of IEEE, 2023, 5: 1-5.

    [65]

    FENG X, LI J, HUA Z. Low-light image enhancement algorithm based on an atmospheric physical model[J]. Multimedia Tools and Applications, 2020, 79(43): 32973-32997.

    [66]

    JIA D, YANG J. A multi-scale image enhancement algorithm based on deep learning and illumination compensation[J]. Traitement du Signal, 2022, 39(1): 179-185.


Article history
  • Received: 2024-08-06
  • Revised: 2024-08-26
  • Published: 2025-02-19
