Vision Transformers for Remote Sensing Image Classification

Remote sensing (Basel, Switzerland), 2021-02, Vol.13 (3), p.516 [Peer Reviewed Journal]

2021. This work is licensed under http://creativecommons.org/licenses/by/3.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. ISSN: 2072-4292; EISSN: 2072-4292; DOI: 10.3390/rs13030516

  • Title:
    Vision Transformers for Remote Sensing Image Classification
  • Author: Bazi, Yakoub ; Bashmal, Laila ; Rahhal, Mohamad M. Al ; Dayil, Reham Al ; Ajlan, Naif Al
  • Subjects: Aircraft ; Artificial neural networks ; Classification ; Convolution ; data augmentation ; Datasets ; Embedding ; Image classification ; image level classification ; Methods ; multihead attention ; Natural language processing ; Neural networks ; Remote sensing ; Satellites ; Semantics ; Transformers ; Unmanned aerial vehicles ; Vision ; vision transformers
  • Is Part Of: Remote sensing (Basel, Switzerland), 2021-02, Vol.13 (3), p.516
  • Description: In this paper, we propose a remote-sensing scene-classification method based on vision transformers. These networks, now recognized as state-of-the-art models in natural language processing, do not rely on convolution layers as standard convolutional neural networks (CNNs) do. Instead, they use multihead attention mechanisms as the main building block to derive long-range contextual relations between pixels in images. In a first step, the images under analysis are divided into patches, which are then flattened and embedded to form a sequence. To retain positional information, a position embedding is added to these patches. The resulting sequence is then fed to several multihead attention layers to generate the final representation. At the classification stage, the first token of the sequence is fed to a softmax classification layer (a minimal sketch of this pipeline follows the record below). To boost classification performance, we explore several data augmentation strategies that generate additional training data. Moreover, we show experimentally that the network can be compressed by pruning half of its layers while keeping competitive classification accuracies. Experimental results on different remote-sensing image datasets demonstrate the promising capability of the model compared to state-of-the-art methods. Specifically, the Vision Transformer obtains average classification accuracies of 98.49%, 95.86%, 95.56% and 93.83% on the Merced, AID, Optimal31 and NWPU datasets, respectively, while the compressed version obtained by removing half of the multihead attention layers yields 97.90%, 94.27%, 95.30% and 93.05%, respectively.
  • Publisher: Basel: MDPI AG
  • Language: English
  • Identifier: ISSN: 2072-4292
    EISSN: 2072-4292
    DOI: 10.3390/rs13030516
  • Source: ROAD: Directory of Open Access Scholarly Resources
    ProQuest Central
    DOAJ Directory of Open Access Journals
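
The following is a minimal PyTorch sketch of the pipeline the abstract describes (patch splitting, flattening and embedding, position embeddings, multihead attention layers, and a softmax classifier on the first token). The hyperparameters (patch size 16, 12 heads/layers, 45 output classes) are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch of the vision-transformer pipeline described in the abstract.
# All hyperparameters below are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn


class SimpleViT(nn.Module):
    def __init__(self, image_size=224, patch_size=16, in_channels=3,
                 embed_dim=768, num_heads=12, num_layers=12, num_classes=45):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        patch_dim = in_channels * patch_size * patch_size

        # 1) Linear embedding of the flattened image patches.
        self.patch_size = patch_size
        self.patch_embed = nn.Linear(patch_dim, embed_dim)

        # 2) Learnable class token and position embeddings (one per patch + class token).
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))

        # 3) Stack of multihead self-attention (transformer encoder) layers.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)

        # 4) Classification head applied to the first (class) token.
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        b, c, h, w = x.shape
        p = self.patch_size
        # Divide (B, C, H, W) into non-overlapping patches and flatten them
        # into a sequence of shape (B, N, C*p*p).
        patches = x.unfold(2, p, p).unfold(3, p, p)            # (B, C, H/p, W/p, p, p)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * p * p)
        tokens = self.patch_embed(patches)                      # (B, N, D)

        # Prepend the class token and add position embeddings to keep spatial order.
        cls = self.cls_token.expand(b, -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed

        # Multihead attention layers produce the final representation.
        tokens = self.encoder(tokens)

        # The first token of the sequence goes to the classification layer;
        # softmax over the logits gives class probabilities.
        return self.head(tokens[:, 0]).softmax(dim=-1)


# Usage: classify a batch of two 224x224 RGB remote-sensing scenes.
model = SimpleViT()
probs = model(torch.randn(2, 3, 224, 224))   # shape (2, num_classes)
```

The compression experiment mentioned in the abstract would correspond, in this sketch, to rebuilding the model with `num_layers` halved (e.g. 6 instead of 12) and fine-tuning, which is a simplified stand-in for the pruning procedure the paper evaluates.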
