
A Spatial-Temporal Attention-Based Method and a New Dataset for Remote Sensing Image Change Detection

Remote sensing (Basel, Switzerland), 2020-05, Vol.12 (10), p.1662 [Peer Reviewed Journal]

© 2020. This work is licensed under the Creative Commons Attribution 3.0 License (http://creativecommons.org/licenses/by/3.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. ISSN: 2072-4292; EISSN: 2072-4292; DOI: 10.3390/rs12101662

  • Title:
    A Spatial-Temporal Attention-Based Method and a New Dataset for Remote Sensing Image Change Detection
  • Author: Chen, Hao ; Shi, Zhenwei
  • Subjects: Algorithms ; attention mechanism ; Change detection ; Classification ; Computer applications ; Datasets ; Detection ; Feature extraction ; fully convolutional networks (FCN) ; image change detection ; image change detection dataset ; Image contrast ; Image detection ; Methods ; Modules ; multi-scale ; Neural networks ; Performance enhancement ; Pixels ; Remote sensing ; Spacetime ; spatial–temporal dependency
  • Is Part Of: Remote sensing (Basel, Switzerland), 2020-05, Vol.12 (10), p.1662
  • Description: Remote sensing image change detection (CD) aims to identify significant changes between bitemporal images. Given two co-registered images taken at different times, illumination variations and misregistration errors may overwhelm the real object changes. Exploring the relationships among different spatial–temporal pixels may improve the performance of CD methods. In our work, we propose a novel Siamese-based spatial–temporal attention neural network. In contrast to previous methods that separately encode the bitemporal images without referring to any useful spatial–temporal dependency, we design a CD self-attention mechanism to model the spatial–temporal relationships. We integrate a new CD self-attention module into the feature extraction procedure. Our self-attention module calculates the attention weights between any two pixels at different times and positions and uses them to generate more discriminative features. Considering that objects may appear at different scales, we partition the image into multi-scale subregions and apply the self-attention within each subregion. In this way, we can capture spatial–temporal dependencies at various scales, thereby generating better representations to accommodate objects of various sizes. We also introduce a CD dataset, LEVIR-CD, which is two orders of magnitude larger than other public datasets in this field. LEVIR-CD consists of a large set of bitemporal Google Earth images, with 637 image pairs (1024 × 1024) and over 31,000 independently labeled change instances. Our proposed attention module improves the F1-score of our baseline model from 83.9 to 87.3 with acceptable computational overhead. Experimental results on a public remote sensing image CD dataset show our method outperforms several other state-of-the-art methods.
  • Publisher: Basel: MDPI AG
  • Language: English
  • Identifier: ISSN: 2072-4292
    EISSN: 2072-4292
    DOI: 10.3390/rs12101662
  • Source: AUTh Library subscriptions: ProQuest Central
    ROAD: Directory of Open Access Scholarly Resources
    DOAJ Directory of Open Access Journals
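
The spatial–temporal self-attention idea described in the abstract can be sketched as follows. This is a hedged toy illustration in NumPy, not the authors' implementation: the identity query/key/value projections stand in for the learned 1×1 convolutions a real module would use, and the multi-scale subregion partitioning is omitted. The point it shows is that features from both dates are pooled into one token set, so attention weights span every pair of positions across both space and time.

```python
import numpy as np

def spatial_temporal_attention(x1, x2):
    """Toy sketch of spatial-temporal self-attention over bitemporal
    feature maps x1, x2 of shape (C, H, W). Attention weights are
    computed between every pair of pixels across both times and
    positions, then used to re-weight the features."""
    c, h, w = x1.shape
    # Stack both dates into one (C, 2*H*W) token matrix so that
    # attention can span space AND time.
    tokens = np.concatenate([x1.reshape(c, -1), x2.reshape(c, -1)], axis=1)
    # Identity projections as placeholders; the real module would use
    # learned 1x1 convolutions to produce query/key/value.
    q = k = v = tokens
    # Scaled dot-product attention over all 2*H*W positions.
    scores = q.T @ k / np.sqrt(c)                 # (2HW, 2HW)
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True) # rows sum to 1
    out = v @ weights.T                           # (C, 2HW)
    # Split the attended features back into the two dates.
    y1 = out[:, : h * w].reshape(c, h, w)
    y2 = out[:, h * w :].reshape(c, h, w)
    return y1, y2

# Usage: two tiny random "feature maps" standing in for the two dates.
rng = np.random.default_rng(0)
x1 = rng.normal(size=(4, 8, 8))
x2 = rng.normal(size=(4, 8, 8))
y1, y2 = spatial_temporal_attention(x1, x2)
print(y1.shape, y2.shape)  # (4, 8, 8) (4, 8, 8)
```

The paper's multi-scale variant would apply this same computation independently within subregions of several sizes and combine the results, so that both small and large changed objects are captured.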
