Abstract:
To address the inability of existing fusion algorithms to dynamically adjust their fusion strategies in response to feature changes within salient frames, which results in poor fusion performance, this paper proposes an adaptive fusion method based on Dempster–Shafer (DS) evidence theory. First, cross-modal intra-frame difference features are computed, and basic belief assignment (BBA) functions are constructed via K-means clustering within a possibility framework. Next, evaluation criterion matrices for both inter-evidence and intra-evidence bodies are established, and the reliability weight of each criterion is determined by entropy weighting. A preference function based on decision distance is then constructed to compute preference priority indices and net flows, yielding an importance weight for each evidence body and providing reliable evidence sources for DS evidence combination. The belief masses of the evidence bodies are subsequently recalibrated with the discount-coefficient method and combined, after which decisions on video-frame changes are made from the combination results. Finally, guided by the intra-frame salient difference features, the optimal fusion operator is selected hierarchically to achieve adaptive fusion. Quantitative and qualitative analyses show that the proposed method significantly outperforms single fusion algorithms in fusion quality.
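As a minimal illustration of the evidence-combination step described above (a sketch only, not the paper's full pipeline), the code below implements classical Shafer discounting and Dempster's rule of combination over a two-hypothesis frame of discernment {change, no-change}. The function names, the frame, and the sample mass values are illustrative assumptions; the paper's actual BBAs come from K-means clustering and its discount coefficients from the net-flow importance weights.

```python
def discount(bba, alpha):
    """Shafer discounting: scale each focal mass by the reliability
    coefficient alpha and assign the remaining (1 - alpha) mass to
    the full frame of discernment (total ignorance)."""
    frame = frozenset().union(*bba)
    out = {A: alpha * m for A, m in bba.items()}
    out[frame] = out.get(frame, 0.0) + (1.0 - alpha)
    return out

def dempster_combine(m1, m2):
    """Dempster's rule: conjunctive combination of two BBAs,
    normalized by (1 - K) where K is the conflict mass."""
    combined = {}
    conflict = 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + a * b
            else:
                conflict += a * b  # empty intersection -> conflict
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence bodies")
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

# Illustrative frame: C = "frame changed", N = "no change".
C, N = frozenset({"C"}), frozenset({"N"})
theta = C | N  # full frame of discernment
m1 = {C: 0.6, N: 0.3, theta: 0.1}
m2 = discount({C: 0.5, N: 0.4, theta: 0.1}, alpha=0.8)
fused = dempster_combine(m1, m2)
```

After combination, the decision rule in the paper would pick the hypothesis with the largest fused mass; here `fused[C]` dominates, so the frame would be judged as changed.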