LI Xinwei, YANG Tian. Double-Branch DenseNet-Transformer Hyperspectral Image Classification[J]. Infrared Technology, 2022, 44(11): 1210-1219.

Double-Branch DenseNet-Transformer Hyperspectral Image Classification

Abstract: To reduce the number of training samples required for hyperspectral image classification while obtaining better results, this paper proposes a double-branch deep network model based on a densely connected network (DenseNet) and spatial-spectral transformers. The model contains two branches that extract the spatial and spectral features of the image in parallel. First, the two branches use 3D and 2D convolutions to perform an initial extraction of the spatial and spectral information of the sub-images, respectively. Deep features are then extracted by a densely connected network composed of batch normalization, the Mish activation function, and 3D convolutions. Next, the two branches apply a spectral transformer module and a spatial transformer module, respectively, to further strengthen the network's feature-extraction ability. Finally, the output feature maps of the two branches are fused to produce the final classification result. The model was tested on the Indian Pines, University of Pavia, Salinas Valley, and Kennedy Space Center datasets and compared with six existing methods. With 3% of the Indian Pines samples and 0.5% of the samples of the other datasets used for training, the overall classification accuracies are 95.75%, 96.75%, 95.63%, and 98.01%, respectively, and the overall performance is better than that of the compared methods.
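To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of the double-branch layout: a convolutional stem per branch, densely connected batch-normalization/Mish/convolution blocks, a small transformer encoder per branch, and fusion of the two branch descriptors before the classifier. This is a sketch under stated assumptions, not the authors' released implementation: all layer widths, kernel sizes, token counts, transformer settings, and the assignment of 3D versus 2D convolutions to the spectral and spatial branches are illustrative choices (the abstract does not fix them), and the spatial branch's dense block is shown with 2D convolutions for brevity.

```python
# A minimal PyTorch sketch of the double-branch model outlined in the abstract.
# Layer widths, kernel sizes, token counts, and the 3D-vs-2D assignment to the
# branches are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    """Dense connections over BN -> Mish -> convolution units (2D or 3D)."""

    def __init__(self, channels, growth, n_layers=2, dims=3):
        super().__init__()
        BN = nn.BatchNorm3d if dims == 3 else nn.BatchNorm2d
        Conv = nn.Conv3d if dims == 3 else nn.Conv2d
        self.layers = nn.ModuleList(
            nn.Sequential(
                BN(channels + i * growth),
                nn.Mish(),
                Conv(channels + i * growth, growth, kernel_size=3, padding=1),
            )
            for i in range(n_layers)
        )

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)  # concatenate all earlier features
        return x


class TransformerBranch(nn.Module):
    """Convolutional stem followed by a small transformer encoder."""

    def __init__(self, stem, embed_dim=64, n_tokens=64, n_heads=4):
        super().__init__()
        self.stem = stem
        self.pool = nn.AdaptiveAvgPool1d(n_tokens)  # cap the token count
        self.proj = nn.LazyLinear(embed_dim)        # channel vector -> token embedding
        layer = nn.TransformerEncoderLayer(embed_dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, x):
        f = self.stem(x).flatten(2)                       # (B, C, N) flattened positions
        tokens = self.proj(self.pool(f).transpose(1, 2))  # (B, n_tokens, embed_dim)
        return self.encoder(tokens).mean(dim=1)           # pooled branch descriptor


class DoubleBranchNet(nn.Module):
    """Parallel spectral/spatial branches fused before the classifier."""

    def __init__(self, bands, n_classes, embed_dim=64):
        super().__init__()
        # Spectral branch: 3D convolution along the band axis, then a dense
        # block of BN -> Mish -> 3D conv units (assumed assignment).
        spectral = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            DenseBlock(8, growth=8, dims=3),
        )
        # Spatial branch: 2D convolution with bands as input channels; a 2D
        # dense block is used here for brevity.
        spatial = nn.Sequential(
            nn.Conv2d(bands, 24, kernel_size=3, padding=1),
            DenseBlock(24, growth=12, dims=2),
        )
        self.spectral_branch = TransformerBranch(spectral, embed_dim)
        self.spatial_branch = TransformerBranch(spatial, embed_dim)
        self.classifier = nn.Linear(2 * embed_dim, n_classes)

    def forward(self, patch):                            # patch: (B, bands, H, W)
        spec = self.spectral_branch(patch.unsqueeze(1))  # add a 3D channel axis
        spat = self.spatial_branch(patch)
        return self.classifier(torch.cat([spec, spat], dim=1))  # fuse branches


model = DoubleBranchNet(bands=200, n_classes=16)  # e.g. Indian Pines dimensions
logits = model(torch.randn(4, 200, 9, 9))         # four 9x9 neighborhood patches
print(logits.shape)                               # torch.Size([4, 16])
```

The nn.LazyLinear projection infers each branch's flattened channel width on the first forward pass, which lets the same TransformerBranch wrapper serve both the 3D and the 2D stem without hand-computed shapes.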

     
