Document Type

Article

Publication Date

2018

DOI

10.3390/s18041051

Publication Title

Sensors

Volume

18

Issue

4

Pages

1051 (17 pages)

Abstract

Although Worldview-2 (WV) images (non-pansharpened) have 2-m resolution, the re-visit times for the same areas may be seven days or more. In contrast, Planet images are collected using small satellites that can cover the whole Earth almost daily. However, the resolution of Planet images is 3.125 m. It would be ideal to fuse images from these two satellites to generate high spatial resolution (2 m) and high temporal resolution (1 or 2 days) images for applications that require quick decisions, such as damage assessment and border monitoring. In this paper, we evaluate three approaches to fusing Worldview (WV) and Planet images. These approaches are known as Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), Flexible Spatiotemporal Data Fusion (FSDAF), and Hybrid Color Mapping (HCM), which have been applied to the fusion of MODIS and Landsat images in recent years. Experimental results using actual Planet and Worldview images demonstrated that the three aforementioned approaches have comparable performance and can all generate high-quality prediction images.
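The three methods share the same setup: a fine-resolution image and a coarse-resolution image acquired at time t1 are used to learn a relationship that is then applied to a coarse image at time t2 to predict the fine image at t2. The sketch below illustrates the simplest variant of this idea, a global least-squares color mapping in the spirit of HCM; the function names, the global (non-patch-based) fit, and the assumption of co-registered inputs resampled to a common grid are illustrative assumptions, not the paper's implementation.

```python
# Minimal, hypothetical sketch of a linear color-mapping fusion (HCM-style idea),
# using NumPy only. Names and the global fit are illustrative assumptions.
import numpy as np

def fit_color_mapping(coarse_t1, fine_t1):
    """Learn a least-squares linear map from coarse bands (+ bias) to fine bands.

    coarse_t1, fine_t1: arrays of shape (H, W, B), co-registered and resampled
    to the same grid at time t1.
    """
    _, _, B = coarse_t1.shape
    X = coarse_t1.reshape(-1, B)
    X = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias column
    Y = fine_t1.reshape(-1, B)
    T, *_ = np.linalg.lstsq(X, Y, rcond=None)      # (B+1, B) mapping matrix
    return T

def predict_fine(coarse_t2, T):
    """Apply the learned mapping to the coarse image at time t2."""
    H, W, B = coarse_t2.shape
    X = coarse_t2.reshape(-1, B)
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    return (X @ T).reshape(H, W, B)

# Usage (shapes are placeholders):
# T = fit_color_mapping(planet_t1, worldview_t1)
# worldview_t2_hat = predict_fine(planet_t2, T)
```

STARFM and FSDAF follow the same prediction setup but use weighted neighborhood similarity and spectral-unmixing/change-detection steps, respectively, rather than a single linear mapping.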

Comments

This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland.

Original Publication Citation

Kwan, C., Zhu, X., Gao, F., Chou, B., Perez, D., Li, J., . . . Marchisio, G. (2018). Assessment of spatiotemporal fusion algorithms for planet and worldview images. Sensors, 18(4), 1051. doi:10.3390/s18041051
