Abstract:
Hyperspectral images contain rich spectral and spatial information, and fully mining this spatial-spectral information for classification is a key research problem. For hyperspectral image classification, convolution is effective at extracting local features, while the Transformer can capture long-range feature dependencies and learn global feature information. Leveraging the complementary advantages of convolution and Transformers, a hyperspectral image classification method combining 3D convolution, spatial-channel reconstruction convolution, and a Transformer is proposed. First, 3D convolution is applied to the dimensionality-reduced image patches to comprehensively extract spatial-spectral features. Spatial-channel reconstruction convolution is then used to filter out redundant information, and finally a densely connected Transformer establishes long-range dependencies over the convolutionally extracted spatial-spectral features, which are classified by a multilayer perceptron. In experiments, the method achieved overall classification accuracies of 99.51%, 99.85%, and 97.57% on the Pavia University, Salinas, and Botswana datasets, respectively.
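The following is a minimal PyTorch sketch of the pipeline outlined in the abstract (3D convolution, a redundancy-filtering convolution stage, a Transformer encoder, and an MLP classifier). All module sizes, the stand-in for the spatial-channel reconstruction convolution, and the omission of the dense-connection scheme are assumptions for illustration only; the paper's actual configuration may differ.

```python
import torch
import torch.nn as nn

class HybridSpectralSpatialNet(nn.Module):
    def __init__(self, bands=30, num_classes=9, dim=64, depth=2, heads=4):
        super().__init__()
        # 3D convolution over (spectral, height, width) to extract joint
        # spatial-spectral features from the dimensionality-reduced patch.
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.BatchNorm3d(8), nn.ReLU(inplace=True),
        )
        # Placeholder for spatial-channel reconstruction convolution (SCConv);
        # a plain 2D convolution stands in here to compress redundant channels.
        self.scconv = nn.Sequential(
            nn.Conv2d(8 * bands, dim, kernel_size=3, padding=1),
            nn.BatchNorm2d(dim), nn.ReLU(inplace=True),
        )
        # Transformer encoder to model long-range dependencies among tokens
        # (dense connectivity between layers is omitted in this sketch).
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)
        # Multilayer perceptron head for classification.
        self.mlp = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, num_classes))

    def forward(self, x):                            # x: (B, 1, bands, H, W)
        f = self.conv3d(x)                           # (B, 8, bands, H, W)
        b, c, d, h, w = f.shape
        f = self.scconv(f.reshape(b, c * d, h, w))   # (B, dim, H, W)
        tokens = f.flatten(2).transpose(1, 2)        # (B, H*W, dim)
        tokens = self.transformer(tokens)            # long-range modeling
        return self.mlp(tokens.mean(dim=1))          # class logits

x = torch.randn(2, 1, 30, 11, 11)                   # two 11x11 patches, 30 bands
print(HybridSpectralSpatialNet()(x).shape)          # torch.Size([2, 9])
```

The patch size, band count, and class count above correspond to a generic hyperspectral setup rather than any specific dataset reported in the abstract.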