Infrared and Visible Image Fusion Based on NSCT Combined with Saliency Map and Region Energy
-
Keywords:
- Image fusion
- Non-subsampled contourlet transform
- Region-energy adaptive weighting
- Sum of Laplacian energy
Abstract: To address the low clarity, low contrast, and insufficiently salient targets produced by traditional infrared and visible image fusion algorithms, this study proposes a fusion method based on the non-subsampled contourlet transform (NSCT) combined with a saliency map and region energy. First, an improved frequency-tuned (FT) method is used to obtain the saliency map of the infrared image, which is normalized to yield a saliency-map weight, and a single-scale Retinex (SSR) algorithm is used to enhance the visible image. Second, NSCT decomposes the infrared and visible images; a new fusion weight based on the normalized saliency map and region energy guides the fusion of the low-frequency coefficients, overcoming the tendency of region-energy adaptive weighting to introduce noise, and an improved weighted sum of Laplacian energy guides the fusion of the high-frequency coefficients. Finally, the fused image is obtained by the inverse NSCT. The proposed method was compared with seven classical methods on six image groups and achieved the best results in information entropy, mutual information, average gradient, and standard deviation; in spatial frequency it ranked second on the first image group and best on the remaining groups. The fused images are rich in information, with high clarity, high contrast, and moderate brightness suitable for human observation, which verifies the effectiveness of the proposed method.
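As a rough illustration of the fusion rule summarized above, the sketch below combines a normalized frequency-tuned saliency map of the infrared image with region (windowed) energy to weight the low-frequency NSCT coefficients. It is a minimal reconstruction based only on the abstract: the exact combination rule, window size, and smoothing parameters of the original method are not given here, so `fuse_low_freq`, its gating formula, and the parameter values are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def ft_saliency(img, sigma=2.0):
    """Frequency-tuned saliency for a single-channel image: distance between the
    global mean and a Gaussian-smoothed copy, normalized to [0, 1]."""
    img = img.astype(np.float64)
    sal = np.abs(gaussian_filter(img, sigma) - img.mean())
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

def region_energy(coeff, win=3):
    """Region energy: windowed sum of squared coefficients."""
    return uniform_filter(coeff.astype(np.float64) ** 2, size=win) * win * win

def fuse_low_freq(ir_low, vis_low, ir_img, win=3):
    """Hypothetical low-frequency fusion rule: the normalized infrared saliency
    map gates an energy-based adaptive weight, which damps the noise that pure
    region-energy weighting tends to pick up in flat regions."""
    sal = ft_saliency(ir_img)
    e_ir, e_vis = region_energy(ir_low, win), region_energy(vis_low, win)
    w_energy = e_ir / (e_ir + e_vis + 1e-12)
    w = np.clip(sal * w_energy + (1.0 - sal) * 0.5, 0.0, 1.0)
    return w * ir_low + (1.0 - w) * vis_low
```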
0. Introduction
The star sensor is one of the most widely used celestial attitude sensors [1-2]. Its operating environment is inevitably affected by temperature changes at the mounting surface and similar factors: thermal deformation of the opto-mechanical structure changes the surface figure of the lenses, degrading the imaging quality of the optical system [3-4] and considerably enlarging the blur spot. To guarantee the imaging quality of the system, an integrated opto-mechanical-thermal analysis of the structure is required to assess the influence of the ambient temperature on the lens assembly [5].
With the development of the aerospace industry, star sensors have been widely used in many applications [6]. Because research on star sensors started relatively late in China, domestic studies of long-focal-length, large-aperture star sensors remain scarce. Meng Xiangyue et al. [7] developed a star sensor with a focal length of 50 mm and an entrance-pupil diameter of 40 mm. Sun Dongqi et al. [8] developed a long-focal-length star sensor based on a double-Gauss optical system with a focal length of 200 mm and an entrance-pupil diameter of 125 mm. Wu Yanxiong et al. [9] developed a high-precision star sensor with a focal length of 200 mm and an entrance-pupil diameter of 100 mm.
This paper presents the design of a large-aperture, thermally insensitive star sensor. The optical system has a focal length of 900 mm, an entrance-pupil diameter of 200 mm, and a spectral range of 450-750 nm. The system is analyzed with an integrated opto-mechanical-thermal approach: the rigid-body displacements of the primary- and secondary-mirror surface nodes computed by Nastran are imported into the opto-mechanical-thermal coupling software Sigfit for Zernike-polynomial fitting, and the fitted Zernike coefficients of the mirror surfaces are then imported into the optical design software Zemax to analyze the rigid-body displacements and other changes of the opto-mechanical structure caused by temperature variation.
1. Opto-mechanical design
1.1 Thermally insensitive optical system
The optical system parameters are as follows: focal length 900 mm, entrance-pupil diameter D ≥ 200 mm, and spectral range 470-900 nm. With the mounting-surface temperature of the thermally insensitive optical system held at 20℃±5℃, the optical-axis deviation is better than 1″, and at the 0.8 field of view 80% of the energy is concentrated within a spot diameter of 9.2-18.4 μm. The layout of the optical system is shown in Fig. 1, and the encircled-energy curves of each waveband at the 0.8 field are shown in Fig. 2; the blur-spot diameter at 80% encircled energy meets the specification.
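The 80%-encircled-energy criterion quoted above can be checked numerically from a sampled spot (PSF) image. The sketch below is a generic implementation, not the procedure used in Zemax or in the test of Section 2.3; the `psf` array and the `pixel_um` sampling interval are hypothetical inputs.

```python
import numpy as np

def encircled_energy_diameter(psf, pixel_um, fraction=0.8):
    """Diameter (in μm) of the circle, centred on the spot centroid, that
    contains the given fraction of the total energy of a sampled spot image."""
    psf = psf.astype(np.float64)
    total = psf.sum()
    ys, xs = np.indices(psf.shape)
    cy = (ys * psf).sum() / total                # intensity-weighted centroid
    cx = (xs * psf).sum() / total
    r = np.hypot(ys - cy, xs - cx).ravel()
    order = np.argsort(r)                        # sort pixels by radius
    cum = np.cumsum(psf.ravel()[order]) / total  # cumulative enclosed energy
    r_frac = r[order][np.searchsorted(cum, fraction)]
    return 2.0 * r_frac * pixel_um
```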
1.2 Structural design
The system adopts an improved Cassegrain configuration. To guarantee the coaxiality of the primary mirror and the rear lens group, a central-mounting scheme is used. The primary mirror is made of microcrystalline glass. To achieve thermal insensitivity and reduce the influence of temperature changes on the structure, the primary-mirror shaft is made of invar, whose coefficient of thermal expansion is close to that of the mirror material. The primary mirror is bonded to the shaft through an adhesive layer; the shaft serves as the connecting member of the whole system and provides the required stiffness, while the compliance of the adhesive layer effectively reduces the deformation of the primary mirror caused by gravity, temperature, and other loads. The primary-mirror assembly is shown in Fig. 3.
The secondary mirror is a highly sensitive optical component: even tiny changes have a large effect, and the size of its support directly determines the central obscuration of the optical system. To ensure structural stability, keep the central obscuration small, and ease manufacturing, three invar blades connect the primary and secondary mirrors, which effectively reduces the change in the primary-secondary mirror spacing caused by temperature and other factors. The support structure is shown in Fig. 4.
The lens group is held by retaining rings to maintain the spacing between the lenses. The lens barrel is made of A704 to reduce the structural mass, and two interfaces are reserved on the rear mechanical structure for subsequent detector integration. The overall structure of the system is shown in Fig. 5.
2. Integrated opto-mechanical-thermal analysis
In this system the stability of the primary-secondary mirror structure has the greatest influence on imaging quality, so only this structure is simulated. The purpose of the analysis is to verify whether the primary-secondary mirror structure meets the optical design requirements over the 20℃±5℃ range.
2.1 Finite-element model
The finite-element model was built in MSC.Patran, as shown in Fig. 6. The mesh was generated manually so that its density could be controlled and the results made more accurate. The model consists mainly of hexahedral elements with a small number of pentahedral elements, giving 12172 elements and 18707 nodes in total. The materials and property parameters of the primary and secondary mirrors and the support structure used in the finite-element calculation are listed in Table 1.
Table 1 Selected material property parameters

| Material | Elastic modulus E/MPa | Poisson ratio μ | Density ρ/(10³ kg·m⁻³) | CTE α/(10⁻⁶ ℃⁻¹) |
|---|---|---|---|---|
| Invar | 141000 | 0.25 | 8.1 | 0.2 |
| TC4 | 114000 | 0.29 | 4.4 | 8.9 |
| Microcrystalline glass | 90600 | 0.24 | 2.53 | 0.5 |
| D04 RTV | 850 | 0.40 | 1.15 | 236 |

At the specified ambient temperature of 25℃ (the upper bound of the 20℃±5℃ range), a temperature load was applied to the primary-secondary mirror model and the rigid-body displacements were computed with Nastran. The rigid-body displacement contours of the two mirrors are shown in Fig. 7: the maximum axial displacement of the primary mirror is 0.228 μm and that of the secondary mirror is 0.986 μm, so the thermal deformation remains within a controllable range.
The radii of curvature of the primary and secondary mirrors, the coordinates of their surface nodes, and the node displacements after thermal deformation were input into the opto-mechanical-thermal coupling tool Sigfit for fitting. The Zernike polynomial coefficients [10] fitted by Sigfit at 25℃ are listed in Table 2.

Table 2 Zernike coefficients

| No. | Expression | Value (primary mirror) | Value (secondary mirror) |
|---|---|---|---|
| 1 | 1 | 1.53E-05 | 7.30E-06 |
| 2 | ρcosθ | 6.47E-10 | 1.15E-08 |
| 3 | ρsinθ | 4.36E-10 | 1.09E-11 |
| 4 | 2ρ²−1 | −1.07E-04 | 1.26E-05 |
| 5 | ρ²cos2θ | −8.13E-08 | −1.41E-10 |
| 6 | ρ²sin2θ | 4.33E-08 | 2.76E-10 |
| 7 | (3ρ³−2ρ)cosθ | 3.62E-09 | 1.74E-11 |
| 8 | (3ρ³−2ρ)sinθ | 5.39E-09 | −1.85E-08 |
| 9 | 6ρ⁴−6ρ²+1 | 3.63E-06 | −3.7E-08 |

Importing the Zernike coefficients of the primary and secondary mirrors into the optical design software Zemax yields the blur-spot diameter and the change of the optical axis. Fig. 8 shows the encircled-energy curves of each waveband at the 0.8° field at an ambient temperature of 25℃: the 80%-encircled-energy blur-spot diameters of all wavebands lie between 9.2 and 18.4 μm, and comparison with Fig. 2 shows that the blur-spot diameter of each waveband increases under the influence of temperature. Meanwhile, Fig. 9 gives a wavefront RMS (root-mean-square) value of 0.035λ < 1/12λ, indicating good imaging quality. Using the RAID operand of the merit function, the angle between the incident ray at the 0° field and the image-plane normal, which approximates the optical-axis deviation, is about 0.033″, better than 1″.
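Conceptually, the Sigfit step above amounts to a least-squares fit of the Fringe Zernike terms listed in Table 2 to the axial displacements of the mirror-surface nodes. The sketch below shows that idea in generic form; it is not Sigfit's actual interface, and the node coordinates, displacements, and aperture radius are assumed inputs taken from the finite-element results.

```python
import numpy as np

def fringe_zernike_basis(rho, theta):
    """First nine Fringe Zernike terms, in the same order as Table 2."""
    return np.column_stack([
        np.ones_like(rho),
        rho * np.cos(theta), rho * np.sin(theta),
        2 * rho**2 - 1,
        rho**2 * np.cos(2 * theta), rho**2 * np.sin(2 * theta),
        (3 * rho**3 - 2 * rho) * np.cos(theta),
        (3 * rho**3 - 2 * rho) * np.sin(theta),
        6 * rho**4 - 6 * rho**2 + 1,
    ])

def fit_zernike(x, y, dz, radius):
    """Least-squares Zernike coefficients for axial surface-node displacements dz
    sampled at coordinates (x, y) over a mirror of the given semi-aperture."""
    rho = np.hypot(x, y) / radius
    theta = np.arctan2(y, x)
    A = fringe_zernike_basis(rho, theta)
    coeffs, *_ = np.linalg.lstsq(A, dz, rcond=None)
    return coeffs
```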
2.2 Alignment and testing
To verify the accuracy of the integrated opto-mechanical-thermal analysis and the soundness of the opto-mechanical design, the primary mirror, secondary mirror, and lens group were aligned in the laboratory at 20℃±5℃. The primary mirror and lens group were aligned and inspected with a coordinate measuring machine to guarantee their positional accuracy, and the secondary mirror was then aligned with an interferometer. The assembled system is shown in Fig. 10.
At a laboratory temperature of 25℃, the on-axis wavefront aberration of the aligned system is shown in Fig. 11, with an RMS value of 0.08λ. The measured RMS value differs only slightly from the finite-element analysis result, and this example verifies the effectiveness of the analysis method used for this system.
2.3 System performance test
The test temperatures were set to 15℃, 20℃, and 25℃. The lens was illuminated by a collimator and mounted on a precision rotary adjustment stage, and the blur spot was obtained by analyzing the imaged spot and its energy distribution; the test setup is shown in Fig. 12. Three sets of data were recorded and averaged, with the final results shown in Fig. 13. All wavebands satisfy the requirement that the blur-spot diameter containing 80% of the energy at the 0.8 field lies within 9.2-18.4 μm.
Within the 20℃±5℃ range, the image point at the 0° field was observed. From Eq. (1):
$$ \frac{a}{f} \times \frac{180^\circ}{\pi} \times 3600 < 1'' $$ (1)

where the pixel size a is 4.6 μm and the focal length f is 900 mm. The calculation shows that as long as the image-point shift is smaller than one pixel, the optical-axis deviation can be regarded as better than 1″. The observed maximum image-point displacement was less than one pixel, so the optical-axis deviation is judged to be better than 1″ and meets the specification. Comparing the measured blur-spot diameter and optical-axis drift with the simulation results confirms that the integrated opto-mechanical-thermal analysis is reliable, so it is worthwhile to perform such an analysis to quickly check whether a designed system meets its specifications.
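A quick numerical check of Eq. (1), with the pixel size and focal length given above (the 3 μm shift in the last line is only a hypothetical sub-pixel value):

```python
import math

def shift_to_arcsec(shift_um, focal_mm=900.0):
    """Convert an image-point shift on the focal plane into an optical-axis
    deviation angle in arcseconds, following Eq. (1)."""
    return (shift_um * 1e-3) / focal_mm * (180.0 / math.pi) * 3600.0

print(shift_to_arcsec(4.6))  # one full 4.6 μm pixel ≈ 1.05″
print(shift_to_arcsec(3.0))  # a clearly sub-pixel shift ≈ 0.69″, within the 1″ budget
```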
3. Conclusion
In this study, the structure of a thermally insensitive optical system was designed and analyzed with the finite-element method combined with integrated opto-mechanical-thermal analysis. Sigfit calculations give a primary/secondary-mirror surface RMS of 0.13λ over 20℃±5℃. Substituting the fitted Zernike coefficients into the optical design software Zemax, the simulation shows an optical-axis deviation of 0.023″ (better than 1″), a wavefront RMS of 0.035λ, and an 80%-encircled-energy blur-spot diameter between 9.2 and 18.4 μm. The assembled system was then aligned and tested: the on-axis wavefront-aberration RMS is 0.08λ, the 80%-encircled-energy blur-spot diameter lies within 9.2-18.4 μm, the maximum image-point displacement is less than one pixel, and the optical-axis deviation is better than 1″, meeting the design specifications. The analysis method can accurately verify whether a system meets its requirements, greatly shortens the development cycle, enables effective evaluation of system performance, and can be applied to the integrated opto-mechanical-thermal analysis of other optical systems.
-
Table 1 Objective evaluation results of Nato_camp

| Methods | IE | SF | MI | AG | SD |
|---|---|---|---|---|---|
| LP | 6.6022 | 13.9059 | 1.8726 | 5.6680 | 28.3882 |
| RP | 6.5073 | 14.7712 | 1.7414 | 5.8658 | 27.3435 |
| DTCWT | 6.4292 | 12.8851 | 1.8237 | 5.1054 | 25.6576 |
| CVT | 6.3256 | 11.8559 | 1.7603 | 4.7871 | 24.0154 |
| NSCT | 6.6369 | 13.5828 | 1.8622 | 5.6284 | 28.6006 |
| DCHWT | 6.3231 | 11.0041 | 1.7605 | 4.4255 | 25.8427 |
| Hybrid_MSD | 6.7020 | 15.1560 | 2.0825 | 6.0049 | 29.1284 |
| Proposed | 7.0235 | 14.8216 | 2.6280 | 6.2154 | 37.1931 |

Table 2 Objective evaluation results of Tree

| Methods | IE | SF | MI | AG | SD |
|---|---|---|---|---|---|
| LP | 5.8484 | 7.7347 | 1.6911 | 3.0154 | 14.7225 |
| RP | 5.8788 | 9.0936 | 1.5948 | 3.2999 | 15.1486 |
| DTCWT | 5.7484 | 7.1703 | 1.6836 | 2.7127 | 13.5645 |
| CVT | 5.7172 | 6.8121 | 1.6988 | 2.5939 | 13.0307 |
| NSCT | 5.8498 | 7.4456 | 1.6643 | 2.9577 | 14.6721 |
| DCHWT | 6.0325 | 6.1470 | 1.5695 | 2.4196 | 16.3405 |
| Hybrid_MSD | 6.2954 | 8.5220 | 1.8939 | 3.2627 | 20.2859 |
| Proposed | 6.8678 | 11.3701 | 3.1874 | 4.8995 | 31.3029 |

Table 3 Objective evaluation results of Duine

| Methods | IE | SF | MI | AG | SD |
|---|---|---|---|---|---|
| LP | 5.9780 | 7.4565 | 1.5175 | 3.3497 | 15.4549 |
| RP | 5.8923 | 6.9660 | 1.5174 | 3.0050 | 14.5825 |
| DTCWT | 5.8808 | 6.7566 | 1.5059 | 3.0241 | 14.4632 |
| CVT | 5.8322 | 6.5233 | 1.4829 | 2.9103 | 14.0198 |
| NSCT | 5.9806 | 7.0812 | 1.5181 | 3.2809 | 15.5069 |
| DCHWT | 5.8015 | 5.7144 | 1.5240 | 2.5810 | 13.6918 |
| Hybrid_MSD | 5.9472 | 8.2933 | 1.5553 | 3.6047 | 15.2660 |
| Proposed | 7.1126 | 13.2301 | 3.1155 | 6.1093 | 35.0350 |

Table 4 Objective evaluation results of APC_4

| Methods | IE | SF | MI | AG | SD |
|---|---|---|---|---|---|
| LP | 5.8574 | 12.7384 | 0.9065 | 5.5624 | 14.5848 |
| RP | 5.6555 | 12.4937 | 0.7502 | 5.0838 | 12.9853 |
| DTCWT | 5.6766 | 11.7014 | 0.8259 | 5.0618 | 12.8683 |
| CVT | 5.5498 | 11.3298 | 0.7902 | 4.8656 | 11.6973 |
| NSCT | 5.8743 | 12.1900 | 0.8733 | 5.4444 | 14.7461 |
| DCHWT | 5.5262 | 9.7761 | 0.8671 | 4.2252 | 11.4534 |
| Hybrid_MSD | 5.9514 | 14.0328 | 1.0230 | 5.9508 | 15.5852 |
| Proposed | 6.6951 | 16.5562 | 2.8247 | 7.5079 | 25.6268 |

Table 5 Objective evaluation results of Kaptein_1654

| Methods | IE | SF | MI | AG | SD |
|---|---|---|---|---|---|
| LP | 6.6557 | 19.0519 | 2.2753 | 6.8009 | 36.8878 |
| RP | 6.7122 | 19.7509 | 2.2324 | 6.9286 | 34.3867 |
| DTCWT | 6.4858 | 17.9294 | 2.2036 | 6.3067 | 31.2040 |
| CVT | 6.3880 | 16.3414 | 2.2711 | 5.6601 | 28.7014 |
| NSCT | 6.6451 | 18.7667 | 2.2189 | 6.7824 | 35.7430 |
| DCHWT | 6.7437 | 15.4424 | 2.2980 | 5.5617 | 36.3196 |
| Hybrid_MSD | 6.8692 | 20.4999 | 2.1659 | 7.1597 | 37.6306 |
| Proposed | 7.0479 | 20.6661 | 3.0611 | 7.7455 | 53.3902 |

Table 6 Objective evaluation results of Movie_18

| Methods | IE | SF | MI | AG | SD |
|---|---|---|---|---|---|
| LP | 5.9545 | 8.3646 | 1.7206 | 3.0967 | 17.9892 |
| RP | 5.7520 | 9.0084 | 1.5003 | 3.1423 | 16.3032 |
| DTCWT | 5.6640 | 7.8037 | 1.6294 | 2.8594 | 14.5863 |
| CVT | 5.5991 | 7.6734 | 1.6439 | 2.8605 | 13.9172 |
| NSCT | 5.8977 | 8.2244 | 1.6706 | 3.0956 | 17.1809 |
| DCHWT | 5.5755 | 6.6402 | 1.6596 | 2.4455 | 15.0237 |
| Hybrid_MSD | 6.2333 | 9.2791 | 2.3197 | 3.3676 | 21.1429 |
| Proposed | 6.8108 | 10.0577 | 3.3102 | 3.7343 | 47.7950 |

-
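For reference, the objective metrics reported in Tables 1-6 can be computed roughly as in the sketch below. These are common textbook definitions; the exact formulas and normalizations used in the paper may differ, and the MI reported in the tables is usually the sum of the mutual information between the fused image and each of the two source images.

```python
import numpy as np

def information_entropy(img):
    """IE: Shannon entropy of the grey-level histogram (8-bit image assumed)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def spatial_frequency(img):
    """SF: root of the combined row- and column-difference energies."""
    img = img.astype(np.float64)
    rf = np.mean(np.diff(img, axis=1) ** 2)   # row-frequency energy
    cf = np.mean(np.diff(img, axis=0) ** 2)   # column-frequency energy
    return np.sqrt(rf + cf)

def average_gradient(img):
    """AG: mean magnitude of the local intensity gradient."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]
    gy = np.diff(img, axis=0)[:, :-1]
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def mutual_information(src, fused, bins=256):
    """MI between one source image and the fused image via the joint histogram."""
    joint, _, _ = np.histogram2d(src.ravel(), fused.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz]))

def standard_deviation(img):
    """SD: grey-level standard deviation (a contrast indicator)."""
    return float(np.std(img.astype(np.float64)))
```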
[1] YANG Sunyun, XI Zhenghao, WANG Handong, et al. Image fusion based on NSCT and minimum-local mean gradient[J]. Infrared Technology, 2021, 43(1): 13-20. http://hwjs.nvir.cn/article/id/144252d1-978c-4c1e-85ad-e0b8d5e03bf6
[2] XIAO Zhongjie. Improved infrared and visible light image fusion algorithm based on NSCT[J]. Infrared Technology, 2017, 39(12): 1127-1130. http://hwjs.nvir.cn/article/id/hwjs201712010
[3] ZHANG B, LU X, PEI H, et al. A fusion algorithm for infrared and visible images based on saliency analysis and non-subsampled Shearlet transform[J]. Infrared Physics & Technology, 2015, 73: 286-297.
[4] ZHOU Z, WANG B, LI S, et al. Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters[J]. Information Fusion, 2016, 30: 15-26. DOI: 10.1016/j.inffus.2015.11.003
[5] MA J, YU W, LIANG P, et al. FusionGAN: A generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11-26. DOI: 10.1016/j.inffus.2018.09.004
[6] ZHANG Y, ZHANG L, BAI X, et al. Infrared and visual image fusion through infrared feature extraction and visual information preservation[J]. Infrared Physics & Technology, 2017, 83: 227-237.
[7] Mertens T, Kautz J, Van Reeth F. Exposure fusion[C]//Proceedings of Pacific Conference on Computer Graphics and Applications, 2007: 382-390.
[8] ZHANG Z, Blum R S. A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application[C]//Proc. IEEE, 1999, 87(8): 1315-1326.
[9] Lewis J J, Callaghan R J O, Nikolov S G, et al. Pixel- and region-based image fusion with complex wavelets[J]. Inf. Fus., 2007, 8(2): 119-130. DOI: 10.1016/j.inffus.2005.09.006
[10] Nencini F, Garzelli A, Baronti S, et al. Remote sensing image fusion using the curvelet transform[J]. Inf. Fus., 2007, 8(2): 143-156. DOI: 10.1016/j.inffus.2006.02.001
[11] YANG S, WANG M, JIAO L, et al. Image fusion based on a new contourlet packet[J]. Inf. Fus., 2010, 11(2): 78-84. DOI: 10.1016/j.inffus.2009.05.001
[12] Cunha A L, ZHOU J P, DO M N. The nonsubsampled Contourlet transform: theory, design, and applications [J]. IEEE Transactions on Image Processing, 2006, 15(10): 3089-3101. DOI: 10.1109/TIP.2006.877507
[13] GUO Ming, FU Zheng, XI Xiaoliang. Novel fusion algorithm for infrared and visible images based on local energy in NSCT domain[J]. Infrared and Laser Engineering, 2012, 41(8): 2229-2235.
[14] Candès E J, Donoho D L. Curvelets and curvilinear integrals[J]. J. Approximation Theor., 2001, 113(1): 59-90. DOI: 10.1006/jath.2001.3624
[15] DO M N, Vetterli M. Contourlets: a directional multiresolution image representation[C]//Proceedings of IEEE International Conference on Image Processing, 2002, 1: I-357-I-360.
[16] ZHAI Y, SHAH M. Visual attention detection in video sequences using spatiotemporal cues[C]//Proceedings of the 14th ACM Conference on Multimedia, 2006: 815-824.
[17] Achanta R, Hemami S, Estrada F, et al. Frequency-tuned salient region detection[C]// IEEE Conference on Computer Vision & Pattern Recognition, 2009: 1597-1604.
[18] YE Kuntao, LI Wen, SHU Leilei, et al. Infrared and visible image fusion method based on improved saliency detection and non-subsampled shearlet transform[J]. Infrared Technology, 2021, 43(12): 1212-1221. http://hwjs.nvir.cn/article/id/bfd9f932-e0bd-4669-b698-b02d42e31805
[19] TANG Zhongjian, MAO Chun. Visible and infrared image fusion algorithm based on saliency guidance[J]. Journal of Terahertz Science and Electronic Information Technology, 2021, 19(1): 125-131.
[20] WANG Huiqin, LYU Jiayun, ZHANG Wei. GPR image denoising based on bilateral filtering BM3D algorithm[J]. Journal of Lanzhou University of Technology, 2022, 48(1): 91-97.
[21] Land E H. The retinex theory of color vision[J]. Scientific American, 1977, 237(6): 108-129. DOI: 10.1038/scientificamerican1277-108
[22] CHENG Tiedong, LU Xiaoliang, YI Qiwen, et al. Research on infrared image enhancement method combined with single-scale Retinex and guided image filter[J]. Infrared Technology, 2021, 43(11): 1081-1088. http://hwjs.nvir.cn/article/id/b49a0a09-e295-40e6-9736-24a58971206e
[23] ZHAI Haixiang, HE Jiaqi, WANG Zhengjia, et al. Improved Retinex and multi-image fusion algorithm for low illumination image enhancement[J]. Infrared Technology, 2021, 43(10): 987-993. http://hwjs.nvir.cn/article/id/23500140-4bab-40b3-9bef-282f14dc105e
[24] LIU Y, LIU S, WANG Z. A general framework for image fusion based on multi-scale transform and sparse representation[J]. Information Fusion, 2015, 24: 147-167.
[25] Kumar B. Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform[J]. Signal, Image & Video Processing, 2013, 7: 1125-1143. DOI: 10.1007/s11760-012-0361-x
[26] ZHOU Z, BO W, SUN L, et al. Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters[J]. Information Fusion, 2016, 30: 15-26.
[27] Toet Alexander. TNO image fusion dataset[EB/OL]. 2014, https://doi.org/10.6084/m9.figshare.1008029.v1.
[28] ZHANG Xiaoli, LI Xiongfei, LI Jun. Validation and correlation analysis of metrics for evaluating performance of image fusion[J]. Acta Automatica Sinica, 2014, 40(2): 306-315.