Adaptive Fusion Algorithm for Dual-Channel Videos Based on DS Evidence Theory

    Abstract: To address the problem that existing fusion algorithms cannot dynamically adjust their fusion strategy in response to feature changes in salient frames, which leads to poor fusion results, an adaptive fusion method based on DS evidence theory is proposed. First, cross-modal intra-frame difference features are computed, and basic belief assignment functions are constructed via K-means clustering within a possibility framework. Next, inter-evidence and intra-evidence evaluation criterion matrices are built, and entropy weights are computed to obtain the reliability weight of each criterion. A preference function based on decision distance is then constructed to calculate preference priority indices and net flows, yielding the importance weight of each evidence body and providing reliable evidence sources for DS evidence combination. The belief degrees of the evidence bodies are subsequently recalibrated using the discount coefficient method and combined, and the combined result is used to decide whether the video frame has changed. Finally, guided by the intra-frame salient difference features, the optimal fusion operands are selected layer by layer to achieve adaptive fusion. Quantitative and qualitative analyses show that the proposed method significantly outperforms single fusion algorithms in fusion quality.
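The entropy-weighting step mentioned in the abstract can be sketched as follows. This is a minimal illustration of the standard entropy-weight method, not the paper's exact formulation; the evaluation matrix values and its 3×2 shape are hypothetical assumptions for demonstration:

```python
import numpy as np

def entropy_weights(X):
    """Entropy-weight method: criteria whose values vary more across
    evidence bodies carry more information and receive larger weights.
    Rows of X are evidence bodies; columns are evaluation criteria."""
    n = X.shape[0]
    P = X / X.sum(axis=0)                          # normalize each criterion column
    P = np.clip(P, 1e-12, 1.0)                     # guard against log(0)
    E = -(P * np.log(P)).sum(axis=0) / np.log(n)   # per-criterion entropy in [0, 1]
    d = 1.0 - E                                    # divergence: low entropy -> high weight
    return d / d.sum()

# Hypothetical evaluation matrix: 3 evidence bodies, 2 criteria.
X = np.array([[1.0, 5.0],
              [1.0, 1.0],
              [1.0, 9.0]])
w = entropy_weights(X)  # the constant first criterion gets near-zero weight
```

Because the first criterion is identical across all evidence bodies, it carries no discriminating information and its weight collapses toward zero, while the dispersed second criterion absorbs nearly all the weight.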

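The discounting-and-combination step can be sketched with classical Shafer discounting followed by Dempster's rule. The two-element frame of discernment ({'changed', 'unchanged'}), the mass values, and the reliability factors below are illustrative assumptions, not values from the paper:

```python
from itertools import product

def discount(bba, alpha):
    """Shafer discounting: scale each mass by reliability alpha and move
    the lost mass to the full frame. Assumes the union of the focal
    elements in `bba` equals the frame of discernment."""
    frame = frozenset().union(*bba)
    out = {A: alpha * m for A, m in bba.items()}
    out[frame] = out.get(frame, 0.0) + (1.0 - alpha)
    return out

def dempster(m1, m2):
    """Dempster's rule of combination for two BBAs whose focal elements
    are frozensets; conflicting mass is renormalized away."""
    combined, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + a * b
        else:
            conflict += a * b
    k = 1.0 - conflict
    return {A: v / k for A, v in combined.items()}

# Hypothetical evidence bodies over the frame {'changed', 'unchanged'}.
CH, UN = frozenset({'changed'}), frozenset({'unchanged'})
TH = CH | UN
m1 = {CH: 0.7, UN: 0.2, TH: 0.1}
m2 = {CH: 0.6, UN: 0.3, TH: 0.1}
# Discount each body by an assumed reliability weight, then combine.
fused = dempster(discount(m1, 0.9), discount(m2, 0.9))
```

After combination, the decision on whether the frame has changed would follow the hypothesis with the largest combined mass (here, 'changed').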
