Semi-supervised semantic segmentation based on Generative Adversarial Networks for remote sensing images

1. Key Laboratory for Information Science of Electromagnetic Waves (MoE), Fudan University, Shanghai 200433, China; 2. Research Center of Smart Networks and Systems, School of Information Science and Technology, Fudan University, Shanghai 200433, China


    Semantic segmentation of very high resolution (VHR) remote sensing images is one of the hot topics in the field of remote sensing image processing. Traditional supervised segmentation methods demand a huge amount of labeled data, while the labeling process is very time-consuming. To solve this problem, a semi-supervised semantic segmentation method for VHR remote sensing images based on Generative Adversarial Networks (GANs) is proposed, which needs only a few labeled samples to obtain good segmentation results. A fully convolutional auxiliary adversarial network is added to the segmentation network, which helps keep the consistency of labels in the segmentation results of VHR remote sensing images. Furthermore, a novel adversarial loss with an attention mechanism is proposed in this paper to solve the problem of easy samples overwhelming the update of the segmentation network, which occurs when the segmentation results can already confuse the discriminator. The experimental results on the ISPRS Vaihingen 2D Semantic Labeling Challenge Dataset show that the proposed method can greatly improve the segmentation accuracy of remote sensing images compared with other state-of-the-art methods.
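The abstract does not give the exact form of the attention-weighted adversarial loss, but the stated motivation (down-weighting easy samples once the discriminator is fooled) can be illustrated with a focal-style weighting of the per-pixel adversarial term. The sketch below is a minimal NumPy illustration under that assumption; the function name, the weighting factor `(1 - D)^gamma`, and the parameter `gamma` are all hypothetical, not taken from the paper.

```python
import numpy as np

def attention_adversarial_loss(confidence_map, gamma=2.0):
    """Hypothetical attention-weighted adversarial loss (illustrative sketch).

    confidence_map: per-pixel discriminator confidence D(S(x)) in (0, 1)
    that a segmentation prediction looks like a ground-truth label map.
    Pixels the discriminator already accepts (high confidence, i.e. easy
    samples) are down-weighted by a focal-style factor (1 - D)^gamma, so
    hard pixels dominate the gradient seen by the segmentation network.
    """
    d = np.clip(confidence_map, 1e-7, 1.0 - 1e-7)
    # standard adversarial term for the generator: push D(S(x)) toward 1
    ce = -np.log(d)
    # attention weight: small where D is already confident (easy pixels)
    weight = (1.0 - d) ** gamma
    return float(np.mean(weight * ce))

# Easy pixels (D ~ 0.9) contribute far less than hard pixels (D ~ 0.1),
# so the loss is no longer dominated by already-convincing regions.
easy = attention_adversarial_loss(np.full((4, 4), 0.9))
hard = attention_adversarial_loss(np.full((4, 4), 0.1))
```

With `gamma = 0` the weight is 1 everywhere and the plain adversarial cross-entropy is recovered; larger `gamma` suppresses easy samples more aggressively.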


LIU Yu-Xi, ZHANG Bo, WANG Bin. Semi-supervised semantic segmentation based on Generative Adversarial Networks for remote sensing images[J]. Journal of Infrared and Millimeter Waves, 2020, 39(4): 473-482

Article Metrics
  • Received:October 08,2019
  • Revised:April 02,2020
  • Accepted:December 09,2019
  • Online: April 01,2020