20 Sep 2021

In this work, we propose a new framework, the Geospatial-temporal Convolutional Neural Network (GT-CNN), and construct a video-based geospatial-temporal precipitation dataset from the surveillance cameras of eight weather stations (sampling points) to recognize precipitation intensity. GT-CNN has three key modules: (1) a geospatial module, (2) a temporal module, and (3) a fusion module. In the geospatial module, we extract precipitation information from each sampling point simultaneously and use an LSTM to model the geospatial relationships between sampling points. In the temporal module, we apply 3D convolution to a sequence of precipitation images from each sampling point to capture precipitation features along the time dimension. Finally, the fusion module fuses the geospatial and temporal features. We evaluate the framework with three metrics and compare GT-CNN against state-of-the-art methods on the self-collected dataset. Experimental results demonstrate that our approach surpasses state-of-the-art methods across these metrics.
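The abstract only outlines the three-module design, so the following is a minimal PyTorch sketch of that layout, not the authors' implementation. The backbone depth, feature width, number of intensity classes, and clip shape are assumptions; only the use of 3D convolution per sampling point, an LSTM across the eight sampling points, and a fusion step come from the abstract.

```python
# Hedged sketch of a GT-CNN-style model: per-point 3D-conv temporal features,
# an LSTM over the sampling points for geospatial relations, and a fusion head.
# Layer sizes and num_classes are illustrative assumptions, not from the paper.
import torch
import torch.nn as nn


class GTCNNSketch(nn.Module):
    def __init__(self, num_points=8, num_classes=4, feat_dim=128):
        super().__init__()
        # Temporal module: 3D convolutions over each sampling point's image sequence.
        self.temporal = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Geospatial module: an LSTM relates the per-point features across
        # the eight sampling points, treated here as a sequence of stations.
        self.geospatial = nn.LSTM(input_size=feat_dim, hidden_size=feat_dim,
                                  batch_first=True)
        # Fusion module: concatenate temporal and geospatial features, then classify.
        self.fusion = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, clips):
        # clips: (batch, num_points, channels, frames, height, width)
        b, p = clips.shape[:2]
        temporal_feats = self.temporal(clips.flatten(0, 1)).view(b, p, -1)
        geo_feats, _ = self.geospatial(temporal_feats)
        fused = torch.cat([temporal_feats, geo_feats], dim=-1)
        return self.fusion(fused.mean(dim=1))  # precipitation-intensity logits


# Example: a batch of 2 samples, 8 sampling points, 16-frame RGB clips at 112x112.
logits = GTCNNSketch()(torch.randn(2, 8, 3, 16, 112, 112))
print(logits.shape)  # torch.Size([2, 4])
```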
