Kajo, I. and Kas, M. and Ruichek, Y. and Kamel, N. (2023) Tensor based completion meets adversarial learning: A win-win solution for change detection on unseen videos. Computer Vision and Image Understanding, 226. ISSN 1077-3142
Full text not available from this repository.

Abstract
Foreground segmentation is an essential processing phase in several change detection-based applications. Classical foreground segmentation depends heavily on the accuracy of the estimated background model and on the procedure used to subtract that model from the original frame. Obtaining good foreground masks via background subtraction remains a challenging task, with limitations such as incomplete foreground objects and foreground misdetection. Owing to their recent successes, deep learning approaches have been widely adopted to tackle the challenges of foreground segmentation. However, recent studies have pointed out that deep learning approaches depend strongly on the training protocol followed, with different protocols leading to clearly different results. Furthermore, several extensive experiments have shown that deep learning approaches perform poorly when processing "unseen videos". Therefore, in this paper, we introduce a generative adversarial network (GAN)-based foreground enhancement framework that accepts multiple images as inputs. The GAN is designed and trained to refine initial foreground masks estimated via hand-crafted background subtraction rather than generating them from scratch. The background fed into the network is initialized beforehand via a spatiotemporal slice-based singular value decomposition (SVD) and updated when changes are present in the scene. The segmentation performance is evaluated qualitatively and quantitatively under scene-dependent and scene-independent scenarios, and the results are compared with existing state-of-the-art methods. The experimental results show that the proposed framework achieves a significant improvement in terms of F-measure and robust performance in unseen scenarios. © 2022 Elsevier Inc.
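The abstract describes a pipeline in which a low-rank background estimate (obtained via SVD) feeds a hand-crafted background subtraction step whose initial masks are then refined by the GAN. The sketch below is only an illustration of that first stage under simplifying assumptions: it uses a generic truncated SVD over the stacked frames rather than the paper's spatiotemporal slice-based variant, and a fixed-threshold absolute difference for the initial mask. Function names, the rank, and the threshold are hypothetical choices, not the authors' implementation.

```python
# Illustrative sketch (not the authors' method): low-rank background estimation
# via truncated SVD over a grayscale frame stack, followed by simple
# background subtraction to obtain an initial foreground mask.
import numpy as np


def estimate_background(frames: np.ndarray, rank: int = 1) -> np.ndarray:
    """frames: (T, H, W) grayscale stack; returns an (H, W) background estimate."""
    T, H, W = frames.shape
    # Arrange each frame as a column of the data matrix.
    M = frames.reshape(T, H * W).T.astype(np.float64)        # (H*W, T)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    # Keep only the leading singular components (the quasi-static background).
    low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    return low_rank.mean(axis=1).reshape(H, W)


def initial_foreground_mask(frame: np.ndarray, background: np.ndarray,
                            threshold: float = 25.0) -> np.ndarray:
    """Absolute-difference background subtraction with a fixed threshold (assumed)."""
    return np.abs(frame.astype(np.float64) - background) > threshold


if __name__ == "__main__":
    # Synthetic usage example: 30 noisy 64x64 frames with a moving bright square.
    rng = np.random.default_rng(0)
    frames = 120.0 + rng.normal(0.0, 2.0, size=(30, 64, 64))
    for t in range(30):
        frames[t, 20:30, t:t + 10] = 255.0                   # moving foreground
    bg = estimate_background(frames, rank=1)
    mask = initial_foreground_mask(frames[15], bg)
    print("foreground pixels:", int(mask.sum()))
```

In the paper, masks of this kind are not used directly; they serve as inputs that the trained GAN refines into the final foreground segmentation.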
| Item Type: | Article |
|---|---|
| Impact Factor: | cited By 0 |
| Uncontrolled Keywords: | Approximation theory; Change detection; Deep learning; Generative adversarial networks; Image enhancement; Video signal processing; Adversarial learning; Background subtraction; Foreground segmentation; Learning approach; Lighting change; Low rank approximations; Stationary foreground; Unseen video; Win-win solutions; Singular value decomposition |
| Depositing User: | Mr Ahmad Suhairi Mohamed Lazim |
| Date Deposited: | 04 Jan 2023 02:46 |
| Last Modified: | 04 Jan 2023 02:46 |
| URI: | http://scholars.utp.edu.my/id/eprint/34159 |