
深度学习偏振图像融合研究现状

段锦 张昊 宋靖远 刘举

引用本文: 段锦, 张昊, 宋靖远, 刘举. 深度学习偏振图像融合研究现状[J]. 红外技术, 2024, 46(2): 119-128.
Citation: DUAN Jin, ZHANG Hao, SONG Jingyuan, LIU Ju. Review of Polarization Image Fusion Based on Deep Learning[J]. Infrared Technology, 2024, 46(2): 119-128.


Funding: 

Science and Technology Development Program of Jilin Province (20220508152RC)

Industrial Technology Research and Development Program of Jilin Province (2023C031-3)

Natural Science Foundation of Chongqing (cstc2021jcyj-msxmX0145)

National Natural Science Foundation of China, Major Scientific Instrument Development Project (62127813)

    Author biography:

    DUAN Jin (1971-), male, professor and doctoral supervisor, engaged in research on pattern recognition, image processing, and machine vision. E-mail: duanjin@vip.sina.com

  • CLC number: TP391.41

Review of Polarization Image Fusion Based on Deep Learning

  • Abstract: Polarization image fusion aims to improve overall image quality by combining spectral information with polarization information, and it is widely used in image enhancement, space remote sensing, target recognition, and military defense. After reviewing traditional fusion methods based on multi-scale transforms, sparse representation, and pseudo-color mapping, this paper focuses on the research status of deep-learning-based polarization image fusion. It first describes progress in polarization image fusion based on convolutional neural networks and generative adversarial networks, then presents applications in object detection, semantic segmentation, image dehazing, and 3D reconstruction, compiles publicly available high-quality polarization image datasets, and finally discusses prospects for future research.
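    For readers unfamiliar with the fusion inputs mentioned in the abstract, the sketch below (our illustration, not code from the paper) shows the standard computation of the Stokes components, degree of linear polarization (DoLP), and angle of polarization (AoP) from four linear-polarizer channels; these quantities, together with the intensity S0, are the typical inputs to polarization image fusion. The function name and the assumption of registered 0°/45°/90°/135° images are ours.

```python
import numpy as np

def stokes_from_intensities(i0, i45, i90, i135):
    """Compute S0/S1/S2, DoLP and AoP from four linear-polarizer intensity images."""
    i0, i45, i90, i135 = (np.asarray(x, dtype=np.float64) for x in (i0, i45, i90, i135))
    s0 = 0.5 * (i0 + i45 + i90 + i135)                      # total intensity
    s1 = i0 - i90                                           # 0°/90° difference
    s2 = i45 - i135                                         # 45°/135° difference
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)   # degree of linear polarization
    aop = 0.5 * np.arctan2(s2, s1)                          # angle of polarization, radians
    return s0, s1, s2, dolp, aop
```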
  • 图  1  基于SR的不同场景融合结果

    Figure  1.  SR-based fusion results for different scenes

    图  2  基于PCNN的不同场景融合结果

    Figure  2.  PCNN-based fusion results for different scenes

    图  3  PFNet网络架构

    Figure  3.  The network architecture of PFNet

    图  4  融合子网架构

    Figure  4.  The architecture of fusion sub-network

    图  5  目标检测结果

    Figure  5.  Results of object detection

    图  6  语义分割结果

    Figure  6.  Results of semantic segmentation

    表  1  传统偏振图像融合方法对比

    Table  1.   Comparison of traditional polarization image fusion methods

    For each traditional method, the table lists its specificities, advantages, and shortcomings:

    Multi-scale transformation
      Specificities: Important visual information can be extracted at different scales, providing better spatial and frequency resolution.
      Advantages: Capable of extracting multi-scale detail and structural information, effectively improving the quality of the fused image.
      Shortcomings: The choice of decomposition levels and fusion rules usually depends on manual experience.

    Sparse representation
      Specificities: Uses a linear subspace representation of training samples and is suitable for approximating similar objects.
      Advantages: Captures sparse features and highlights target details, retaining information unique to each source image.
      Shortcomings: Dictionary training is computationally costly and relatively sensitive to noise and pseudo-features.

    Pulse coupled neural network
      Specificities: Composed of neurons with receptive, modulation, and pulse-generation parts; suitable for real-time image processing.
      Advantages: Effectively detects edge and texture features, so edge information is fused relatively well.
      Shortcomings: Requires many iterative computations; the operations are highly coupled, the parameters are numerous, and it is time-consuming.

    Pseudo-color-based methods
      Specificities: Maps the gray levels of a black-and-white or monochrome image to a color space, or assigns corresponding colors to different gray levels.
      Advantages: Images from different bands can be mapped to a pseudo-color space, visually presenting multi-band information for easy observation and understanding.
      Shortcomings: Mainly colorizes the image; it cannot extract and fuse additional information, and its ability to retain detail is relatively weak.
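    To make the multi-scale-transform entry of Table 1 concrete, the following sketch fuses an intensity image with a DoLP image via a discrete wavelet transform, averaging the approximation band and keeping the larger-magnitude detail coefficients. It is a minimal illustration of the general idea, assuming the PyWavelets package; the wavelet, decomposition level, and fusion rule are placeholder choices, not those of any specific cited method.

```python
import numpy as np
import pywt  # PyWavelets; assumed available (pip install PyWavelets)

def dwt_fuse(img_a: np.ndarray, img_b: np.ndarray, wavelet: str = "db2", level: int = 3) -> np.ndarray:
    """Fuse two registered, equal-size grayscale images with a wavelet max-abs rule."""
    ca = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]  # average the low-frequency approximation band
    for bands_a, bands_b in zip(ca[1:], cb[1:]):
        # keep whichever detail coefficient has the larger magnitude (salient edges/texture)
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(bands_a, bands_b)))
    return pywt.waverec2(fused, wavelet)

# usage sketch: fused = dwt_fuse(s0_image, dolp_image)
```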

    表  2  基于CNN和GAN的偏振图像融合算法对比

    Table  2.   Comparison of polarization image fusion algorithms based on CNN and GAN

    For each deep-learning approach, the table lists its specificities, advantages, and shortcomings:

    CNN
      Specificities: Algorithm complexity depends on the encoding scheme and the design of the fusion rules; a CNN-based fusion network has stronger feature learning and representation ability, making it well suited to information extraction and feature fusion.
      Advantages: Automatically learns image features and patterns, which simplifies algorithm design and implementation and greatly improves accuracy; widely used for feature extraction and representation.
      Shortcomings: May overfit when trained on small datasets; can be insensitive to fine image detail and lose it during fusion; deeper networks usually require substantial computational resources and time.

    GAN
      Specificities: The fusion process is modeled as an adversarial game between a generator and a discriminator; through continual optimization, the generator's fused result converges to the target image in probability distribution, so feature extraction, fusion, and image reconstruction are realized implicitly.
      Advantages: The adversarial learning mechanism between generator and discriminator improves the realism and overall quality of the fusion, better preserving the detail and structure of the source images; training is unsupervised and usually does not require large amounts of labeled data.
      Shortcomings: Training is relatively unstable; design and tuning are comparatively complex, requiring careful selection and adjustment of the network architecture and loss function; the generated images may show artifacts or look unnatural in some scenarios.
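    As an illustration of the CNN row in Table 2, the sketch below outlines a tiny encoder-fusion-decoder network in PyTorch that fuses an S0 image with a DoLP image. It is not PFNet or any other cited architecture; the layer widths, concatenation fusion, Sigmoid output, and the loss shown in the comments are placeholder choices.

```python
import torch
import torch.nn as nn

class TinyFusionNet(nn.Module):
    """Illustrative encoder-fusion-decoder CNN for fusing S0 and DoLP images."""
    def __init__(self, ch: int = 16):
        super().__init__()
        # shared encoder applied to each single-channel input
        self.encoder = nn.Sequential(
            nn.Conv2d(1, ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # decoder reconstructs a single fused image from the concatenated features
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * ch, ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, s0: torch.Tensor, dolp: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.encoder(s0), self.encoder(dolp)], dim=1)  # concatenation fusion
        return self.decoder(feats)

# One unsupervised-style training step (the loss design is a placeholder):
#   net = TinyFusionNet(); fused = net(s0, dolp)
#   loss = nn.functional.l1_loss(fused, s0) + nn.functional.l1_loss(fused, dolp)
```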

    表  3  偏振图像数据集

    Table  3.   Polarization image dataset

    Source Waveband Year Quantity Resolution
    Reference [60] Visible band (grayscale) 2019 120 1280×960
    Reference [61] Visible band (RGB) 2020 40 1024×768
    Reference [62-63] Long-wave infrared band 2020 2113 640×512
    Reference [47] Visible band (RGB) 2021 394 1224×1024
    Reference [64] Visible band (RGB) 2021 66 1848×2048
    Reference [65] Visible band (RGB) 2021 40 1024×1024
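    Several of the datasets in Table 3 come from division-of-focal-plane (DoFP) sensors, where a 2×2 micro-polarizer pattern is interleaved on a single chip. The sketch below shows the naive way to split such a raw frame into four quarter-resolution polarization channels; the assumed (90°, 45°, 135°, 0°) layout varies between sensors and should be checked against each dataset's documentation, and the cited demosaicking papers exist precisely to do better than this plain slicing.

```python
import numpy as np

def split_dofp(raw: np.ndarray):
    """Split a DoFP mosaic into four polarization channels (assumed layout; check your sensor)."""
    # Assumed 2x2 super-pixel layout, in degrees:  [[90, 45],
    #                                               [135, 0]]
    i90  = raw[0::2, 0::2].astype(np.float64)
    i45  = raw[0::2, 1::2].astype(np.float64)
    i135 = raw[1::2, 0::2].astype(np.float64)
    i0   = raw[1::2, 1::2].astype(np.float64)
    return i0, i45, i90, i135
```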
  • [1] LI S, KANG X, FANG L, et al. Pixel-level image fusion: a survey of the state of the art[J]. Information Fusion, 2017, 33: 100-112. doi:  10.1016/j.inffus.2016.05.004
    [2] ZHANG H, XU H, TIAN X, et al. Image fusion meets deep learning: a survey and perspective[J]. Information Fusion, 2021, 76: 323-336. doi:  10.1016/j.inffus.2021.06.008
    [3] 罗海波, 张俊超, 盖兴琴, 等. 偏振成像技术的发展现状与展望(特邀)[J]. 红外与激光工程, 2022, 51(1): 101-110.

    LUO Haibo, ZHANG Junchao, GAI Xingqin, et al. Development status and prospect of polarization imaging technology (Invited)[J]. Infrared and Laser Engineering, 2022, 51(1): 101-110.
    [4] 周强国, 黄志明, 周炜. 偏振成像技术的研究进展及应用[J]. 红外技术, 2021, 43(9): 817-828. http://hwjs.nvir.cn/article/id/76230e4e-2d34-4b1e-be97-88c5023050c6

    ZHOU Qiangguo, HUANG Zhiming, ZHOU Wei. Research progress and application of polarization imaging technology[J]. Infrared Technology, 2021, 43(9): 817-828. http://hwjs.nvir.cn/article/id/76230e4e-2d34-4b1e-be97-88c5023050c6
    [5] 段锦, 付强, 莫春和, 等. 国外偏振成像军事应用的研究进展(上)[J]. 红外技术, 2014, 36(3): 190-195. http://hwjs.nvir.cn/article/id/hwjs201403003

    DUAN Jin, FU Qiang, MO Chunhe, et al. Review of polarization imaging technology for international military application(Ⅰ)[J]. Infrared Technology, 2014, 36(3): 190-195. http://hwjs.nvir.cn/article/id/hwjs201403003
    [6] 莫春和, 段锦, 付强, 等. 国外偏振成像军事应用的研究进展(下)[J]. 红外技术, 2014, 36(4): 265-270. http://hwjs.nvir.cn/article/id/hwjs201404002

    MO Chunhe, DUAN Jin, FU Qiang, et al. Review of polarization imaging technology for international military application(Ⅱ)[J]. Infrared Technology, 2014, 36(4): 265-270. http://hwjs.nvir.cn/article/id/hwjs201404002
    [7] 王霞, 赵家碧, 孙晶, 等. 偏振图像融合技术综述[J]. 航天返回与遥感, 2021, 42(6): 9-21.

    WANG Xia, ZHAO Jiabi, SUN Jing, et al. Review of polarization image fusion technology[J]. Aerospace Return and Remote Sensing, 2021, 42(6): 9-21.
    [8] LI X, YAN L, QI P, et al. Polarimetric imaging via deep learning: a review[J]. Remote Sensing, 2023, 15(6): 1540. doi:  10.3390/rs15061540
    [9] YANG Fengbao, DONG Anran, ZHANG Lei, et al. Infrared polarization image fusion based on combination of NSST and improved PCA[J]. Journal of Measurement Science and Instrumentation, 2016, 7(2): 176-184.
    [10] 杨风暴, 董安冉, 张雷, 等. DWT, NSCT和改进PCA协同组合红外偏振图像融合[J]. 红外技术, 2017, 39(3): 201-208. http://hwjs.nvir.cn/article/id/hwjs201703001

    YANG Fengbao, DONG Anran, ZHANG Lei, et al. Infrared polarization image fusion using the synergistic combination of DWT, NSCT and improved PCA[J]. Infrared Technology, 2017, 39(3): 201-208. http://hwjs.nvir.cn/article/id/hwjs201703001
    [11] 沈薛晨, 刘钧, 高明. 基于小波-Contourlet变换的偏振图像融合算法[J]. 红外技术, 2020, 42(2): 182-189. http://hwjs.nvir.cn/article/id/hwjs202002013

    SHEN Xuechen, LIU Jun, GAO Ming. Polarization image fusion algorithm based on Wavelet-Contourlet transform[J]. Infrared Technology, 2020, 42(2): 182-189. http://hwjs.nvir.cn/article/id/hwjs202002013
    [12] 张雨晨, 李江勇. 基于小波变换的中波红外偏振图像融合[J]. 激光与红外, 2020, 50(5): 578-582.

    ZHANG Yuchen, LI Jiangyong. Polarization image fusion based on wavelet transform[J]. Laser & Infrared, 2020, 50(5): 578-582.
    [13] 王策, 许素安. 基于Retinex和小波变换的水下偏振图像融合方法[J]. 应用激光, 2022, 42(8): 116-122.

    WANG Ce, XU Suan. Underwater polarization image fusion method based on Retinex and wavelet transform[J]. Applied Laser, 2022, 42(8): 116-122.
    [14] 陈锦妮, 陈宇洋, 李云红, 等. 基于结构与分解的红外光强与偏振图像融合[J]. 红外技术, 2023, 45(3): 257-265. http://hwjs.nvir.cn/article/id/379e87a8-b9c0-4081-820c-ccd63f3fe4f0

    CHEN Jinni, CHEN Yuyang, LI Yunhong, et al. Fusion of infrared intensity and polarized images based on structure and decomposition[J]. Infrared Technology, 2023, 45(3): 257-265. http://hwjs.nvir.cn/article/id/379e87a8-b9c0-4081-820c-ccd63f3fe4f0
    [15] LIU Y, LIU S, WANG Z. A general framework for image fusion based on multiscale transform and sparse representation[J]. Information Fusion, 2015, 24(C): 147-164.
    [16] 朱攀, 刘泽阳, 黄战华. 基于DTCWT和稀疏表示的红外偏振与光强图像融合[J]. 光子学报, 2017, 46(12): 207-215.

    ZHU Pan, LIU Zeyang, HUANG Zhanhua. Infrared polarization and intensity image fusion based on dual-tree complex wavelet transform and sparse representation[J]. Acta Photonica Sinica, 2017, 46(12): 207-215.
    [17] ZHU P, LIU L, ZHOU X. Infrared polarization and intensity image fusion based on bivariate BEMD and sparse representation[J]. Multimedia Tools and Applications, 2021, 80(3): 4455-4471. doi:  10.1007/s11042-020-09860-z
    [18] ZHANG S, YAN Y, SU L, et al. Polarization image fusion algorithm based on improved PCNN[C]//Proceedings of SPIE-The International Society for Optical Engineering, 2013, 9045.
    [19] 李世维, 黄丹飞, 王惠敏, 等. 基于BEMD和自适应PCNN的偏振图像融合[J]. 激光杂志, 2018, 39(3): 94-98.

    LI Shiwei, HUANG Danfei, WANG Huimin, et al. Polarization image fusion based on BEMD and adaptive PCNN[J]. Laser Journal, 2018, 39(3): 94-98.
    [20] 于津强, 段锦, 陈伟民, 等. 基于NSST与自适应SPCNN的水下偏振图像融合[J]. 激光与光电子学进展, 2020, 57(6): 103-113.

    YU Jinqiang, DUAN Jin, CHEN Weimin, et al. Underwater polarization image fusion based on NSST and adaptive SPCNN[J]. Laser & Optoelectronics Progress, 2020, 57(6): 103-113.
    [21] 叶松, 汤伟平, 孙晓兵, 等. 一种采用IHS空间表征偏振遥感图像的方法[J]. 遥感信息, 2006, 21(2): 11-13.

    YE Song, TANG Weiping, SUN Xiaobing, et al. Characterization of the polarized remote sensing images using IHS color system[J]. Remote Sensing Information, 2006, 21(2): 11-13.
    [22] 赵永强, 潘泉, 张洪才. 自适应多波段偏振图像融合研究[J]. 光子学报, 2007, 36(7): 1356-1359.

    ZHAO Yongqiang, PAN Quan, ZHANG Hongcai. Research on adaptive multi-band polarization image fusion[J]. Acta Photonica Sinica, 2007, 36(7): 1356-1359.
    [23] 赵永强, 潘泉, 张洪才. 一种新的全色图像与光谱图像融合方法研究[J]. 光子学报, 2007, 36(1): 180-183.

    ZHAO Yongqiang, PAN Quan, ZHANG Hongcai. A new spectral and panchromatic images fusion method[J]. Acta Photonica Sinica, 2007, 36(1): 180-183.
    [24] 周浦城, 韩裕生, 薛模根, 等. 基于非负矩阵分解和IHS颜色模型的偏振图像融合方法[J]. 光子学报, 2010, 39(9): 1682-1687.

    ZHOU Pucheng, HAN Yusheng, XUE Mogen, et al. Polarization image fusion method based on non-negative matrix factorization and IHS color model[J]. Acta Photonica Sinica, 2010, 39(9): 1682-1687.
    [25] 周浦城, 张洪坤, 薛模根. 基于颜色迁移和聚类分割的偏振图像融合方法[J]. 光子学报, 2011, 40(1): 149-153.

    ZHOU Pucheng, ZHANG Hongkun, XUE Mogen. Polarization image fusion method using color transfer and clustering-based segmentation[J]. Acta Photonica Sinica, 2011, 40(1): 149-153.
    [26] 李伟伟, 杨风暴, 蔺素珍, 等. 红外偏振与红外光强图像的伪彩色融合研究[J]. 红外技术, 2012, 34(2): 109-113. doi:  10.3969/j.issn.1001-8891.2012.02.010

    LI Weiwei, YANG Fengbao, LIN Suzhen, et al. Study on pseudo-color fusion of infrared polarization and intensity image[J]. Infrared Technology, 2012, 34(2): 109-113. doi:  10.3969/j.issn.1001-8891.2012.02.010
    [27] 孙晶. 多波段偏振图像融合方法研究[D]. 北京: 北京理工大学, 2019.

    SUN Jing. Research on Multi-band Polarization Image Fusion Method[D]. Beijing: Beijing Institute of Technology, 2019.
    [28] 苏子航. 多波段偏振图像信息校正与增强技术研究[D]. 北京: 北京理工大学, 2021.

    SU Zihang. Research on Multi-band Polarization Image Information Correction and Enhancement Technology[D]. Beijing: Beijing Institute of Technology, 2021.
    [29] HU J, MOU L, SCHMITT A, et al. FusioNet: a two-stream convolutional neural network for urban scene classification using PolSAR and hyperspectral data[C]//Proceedings of the 2017 Joint Urban Remote Sensing Event (JURSE), 2017: 1-4.
    [30] ZHANG J, SHAO J, CHEN J, et al. PFNet: an unsupervised deep network for polarization image fusion[J]. Optics Letters, 2020, 45(6): 1507-1510. doi:  10.1364/OL.384189
    [31] WANG S, MENG J, ZHOU Y, et al. Polarization image fusion algorithm using NSCT and CNN[J]. Journal of Russian Laser Research, 2021, 42(4): 443-452. doi:  10.1007/s10946-021-09981-2
    [32] ZHANG J, SHAO J, CHEN J, et al. Polarization image fusion with self-learned fusion strategy[J]. Pattern Recognition, 2021, 118(22): 108045.
    [33] XU H, SUN Y, MEI X, et al. Attention-Guided polarization image fusion using salient information distribution[J]. IEEE Transactions on Computational Imaging, 2022, 8: 1117-1130. doi:  10.1109/TCI.2022.3228633
    [34] 闫德利, 申冲, 王晨光, 等. 强度图像和偏振度图像融合网络的设计[J]. 光学精密工程, 2023, 31(8): 1256-1266.

    YAN Deli, SHEN Chong, WANG Chenguang, et al. Design of intensity image and polarization image fusion network[J]. Optics and Precision Engineering, 2023, 31(8): 1256-1266.
    [35] GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]//Advances in Neural Information Processing Systems, 2014: 2672-2680.
    [36] MA J, YU W, LIANG P, et al. FusionGAN: a generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11-26. doi:  10.1016/j.inffus.2018.09.004
    [37] ZHAO C, WANG T, LEI B. Medical image fusion method based on dense block and deep convolutional generative adversarial network[J]. Neural Computing and Applications, 2021, 33: 6595-6610.
    [38] LIU Q, ZHOU H, XU Q, et al. PSGAN: a generative adversarial network for remote sensing image pan-sharpening[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(12): 10227-10242. doi:  10.1109/TGRS.2020.3042974
    [39] MA J, XU H, JIANG J, et al. DDcGAN: a dual-discriminator conditional generative adversarial network for multi-resolution image fusion[J]. IEEE Transactions on Image Processing, 2020, 29: 4980-4995. doi:  10.1109/TIP.2020.2977573
    [40] LI J, HUO H, LI C, et al. Attention FGAN: infrared and visible image fusion using attention-based generative adversarial networks[J]. IEEE Transactions on Multimedia, 2021, 23: 1383-1396. doi:  10.1109/TMM.2020.2997127
    [41] MA J, ZHANG H, SHAO Z, et al. GANMcC: a generative adversarial network with multi-classification constraints for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 1-14.
    [42] WEN Z, WU Q, LIU Z, et al. Polar-spatial feature fusion learning with variational generative-discriminative network for PolSAR classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(11): 8914-8927. doi:  10.1109/TGRS.2019.2923738
    [43] DING X, WANG Y, FU X. Multi-polarization fusion generative adversarial networks for clear underwater imaging[J]. Optics and Lasers in Engineering, 2022, 152: 106971. doi:  10.1016/j.optlaseng.2022.106971
    [44] LIU J, DUAN J, HAO Y, et al. Semantic-guided polarization image fusion method based on a dual-discriminator GAN[J]. Optics Express, 2022, 30: 43601-43621. doi:  10.1364/OE.472214
    [45] SUN R, SUN X, CHEN F, et al. An artificial target detection method combining a polarimetric feature extractor with deep convolutional neural networks[J]. International Journal of Remote Sensing, 2020, 41: 4995-5009. doi:  10.1080/01431161.2020.1727584
    [46] ZHANG Y, MOREL O, BLANCHON M, et al. Exploration of deep learning based multimodal fusion for semantic road scene segmentation[C]//14th International Conference on Computer Vision Theory and Applications, 2019: 336-343.
    [47] XIANG K, YANG K, WANG K. Polarization-driven semantic segmentation via efficient attention-bridged fusion[J]. Optics Express, 2021, 29: 4802-4820. doi:  10.1364/OE.416130
    [48] 霍永胜. 基于偏振的暗通道先验去雾[J]. 物理学报, 2022, 71(14): 112-120.

    HUO Yongsheng. Polarization-based research on a priori defogging of dark channel[J]. Acta Physica Sinica, 2022, 71(14): 112-120.
    [49] 孟宇飞, 王晓玲, 刘畅, 等. 四分暗通道均值比较法的双角度偏振图像去雾[J]. 激光与光电子学进展, 2022, 59(4): 232-240.

    MENG Yufei, WANG Xiaoling, LIU Chang, et al. Dehazing of dual angle polarization image based on mean comparison of quartering dark channels[J]. Laser & Optoelectronics Progress, 2022, 59(4): 232-240.
    [50] 张肃, 战俊彤, 付强, 等. 基于多小波融合的偏振探测去雾技术[J]. 激光与光电子学进展, 2018, 55(12): 468-477.

    ZHANG Su, ZHAN Juntong, FU Qiang, et al. Polarization detection defogging technology based on multi-wavelet fusion[J]. Laser & Optoelectronics Progress, 2018, 55(12): 468-477.
    [51] HUANG F, KE C, WU X, et al. Polarization dehazing method based on spatial frequency division and fusion for a far-field and dense hazy image[J]. Applied Optics, 2021, 60: 9319-9332. doi:  10.1364/AO.434886
    [52] 周文舟, 范晨, 胡小平, 等. 多尺度奇异值分解的偏振图像融合去雾算法与实验[J]. 中国光学, 2021, 14(2): 298-306.

    ZHOU Wenzhou, FAN Chen, HU Xiaoping, et al. Multi-scale singular value decomposition polarization image fusion defogging algorithm and experiment[J]. Chinese Optics, 2021, 14(2): 298-306.
    [53] 李轩, 刘飞, 邵晓鹏. 偏振三维成像技术的原理和研究进展[J]. 红外与毫米波学报, 2021, 40(2): 248-262.

    LI Xuan, LIU Fei, SHAO Xiaopeng. Research progress on polarization 3D imaging technology[J]. Journal of Infrared and Millimeter Waves, 2021, 40(2): 248-262.
    [54] 王霞, 赵雨薇, 金伟其. 融合光学偏振的三维成像技术进展(特邀)[J]. 光电技术应用, 2022, 37(5): 33-43.

    WANG Xia, ZHAO Yuwei, JIN Weiqi. Overview of polarization-based three-dimensional imaging techniques(Invited)[J]. Opto-electronic Technology Application, 2022, 37(5): 33-43.
    [55] 杨锦发, 晏磊, 赵红颖, 等. 融合粗糙深度信息的低纹理物体偏振三维重建[J]. 红外与毫米波学报, 2019, 38(6): 819-827.

    YANG Jinfa, YAN Lei, ZHAO Hongying, et al. Shape from polarization of low-texture objects with rough depth information[J]. Journal of Infrared and Millimeter Waves, 2019, 38(6): 819-827.
    [56] 张瑞华, 施柏鑫, 杨锦发, 等. 基于视差角和天顶角优化的偏振多视角三维重建[J]. 红外与毫米波学报, 2021, 40(1): 133-142.

    ZHANG Ruihua, SHI Baixin, YANG Jinfa, et al. Polarization multi-view 3D reconstruction based on parallax angle and zenith angle optimization[J]. Journal of Infrared and Millimeter Waves, 2021, 40(1): 133-142.
    [57] BA Y, GILBERT A, WANG F, et al. Deep shape from polarization[C]//Computer Vision–ECCV 2020: 16th European Conference, 2020: 554-571.
    [58] 陈创斌. 基于偏振信息的表面法线估计[D]. 广州: 广东工业大学, 2021.

    CHEN Chuangbin. Surface Normal Estimation Based on Polarization Information[D]. Guangzhou: Guangdong University of Technology, 2021.
    [59] 王晓敏. 融合偏振和光场信息的低纹理目标三维重建算法研究[D]. 太原: 中北大学, 2022.

    WANG Xiaomin. Research on Low Texture Target 3D Reconstruction Algorithm Integrating Polarization and Light Field Information[D]. Taiyuan: North University of China, 2022.
    [60] ZENG X, LUO Y, ZHAO X, et al. An end-to-end fully-convolutional neural network for division of focal plane sensors to reconstruct S0, DoLP, and AoP[J]. Optics Express, 2019, 27: 8566-8577. doi:  10.1364/OE.27.008566
    [61] MORIMATSU M, MONNO Y, TANAKA M, et al. Monochrome and color polarization demosaicking using edge-aware residual interpolation[C]//2020 IEEE International Conference on Image Processing (ICIP), 2020: 2571-2575.
    [62] LI N, ZHAO Y, PAN Q, et al. Full-time monocular road detection using zero-distribution prior of angle of polarization[C]//European Conference on Computer Vision (ECCV), 2020: 457-473.
    [63] LI N, ZHAO Y, PAN Q, et al. Illumination-invariant road detection and tracking using LWIR polarization characteristics[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2021, 180: 357-369. doi:  10.1016/j.isprsjprs.2021.08.022
    [64] SUN Y, ZHANG J, LIANG R. Color polarization demosaicking by a convolutional neural network[J]. Optics Letters, 2021, 46: 4338-4341. doi:  10.1364/OL.431919
    [65] QIU S, FU Q, WANG C, et al. Linear polarization demosaicking for monochrome and colour polarization focal plane arrays[J]. Computer Graphics Forum, 2021, 40: 77-89. doi:  10.1111/cgf.14204
Publication history
  • Received: 2023-06-05
  • Revised: 2023-08-09
  • Published: 2024-02-20
