Enhanced Deep Learning Framework for Fine-Grained Segmentation of Fashion and Apparel

Usmani, U.A. and Happonen, A. and Watada, J. (2022) Enhanced Deep Learning Framework for Fine-Grained Segmentation of Fashion and Apparel. Lecture Notes in Networks and Systems, 507 LN. pp. 29-44.

Full text not available from this repository.
Official URL: https://www.scopus.com/inward/record.uri?eid=2-s2....

Abstract

3D clothing models have been learned from real clothing data, but predicting the exact segmentation mask of a garment remains difficult because the mask varies with garment size. Accurate clothing segmentation has become an important problem in recent years, driven by automatic product detection aimed at enhancing the consumer shopping experience. The ability to recognize clothing products and their associated attributes further improves that experience. Over the last five years, computer vision literature in the fashion domain has focused on solutions for clothing recognition; still, a gap remains between the efforts of the fashion design and computer vision communities. This work proposes a deep learning framework that learns to detect and segment clothing objects accurately. We propose a clothing segmentation framework with novel feature extraction and fusion modules. The feature extraction module extracts low-level feature data using Mask Region-based Convolutional Neural Network (Mask R-CNN) segmentation branches, while Inception V3 extracts high-level semantic data. The feature fusion module then fuses the two types of image feature data with a new reference vector for each image. Consequently, the fused feature contains both high-level and low-level image semantic information, boosting the performance of our overall segmentation framework. We test our algorithm on the Polyvore and DeepFashion2 databases, since these are the standard datasets used by current methods for running the simulations. Compared to current state-of-the-art segmentation methods, our results perform better, with an accuracy of 17.3 and an AUC of 4. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
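The fusion step described in the abstract could be sketched as follows. The abstract does not specify how the per-image reference vector is built, so this minimal numpy sketch assumes one plausible choice (the L2-normalised element-wise mean of the two feature vectors); the function name `fuse_features` and the common-dimension requirement are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fuse_features(low_level, high_level, eps=1e-8):
    """Fuse low-level (e.g. Mask R-CNN branch) and high-level
    (e.g. Inception V3) feature vectors for a single image.

    Assumption for illustration: the reference vector is the
    L2-normalised element-wise mean of the two inputs, and the
    fused feature is the concatenation of both inputs scaled
    by that reference vector.
    """
    low = np.asarray(low_level, dtype=np.float64)
    high = np.asarray(high_level, dtype=np.float64)
    # In practice the two branches would first be projected
    # to a common dimensionality; here we require it up front.
    assert low.shape == high.shape, "project features to a common dim first"

    ref = (low + high) / 2.0                  # per-image reference vector
    ref = ref / (np.linalg.norm(ref) + eps)   # normalise for scale invariance

    # Fused feature carries both low- and high-level information.
    return np.concatenate([low * ref, high * ref])

# Toy example: 4-dimensional features from each branch.
low = [0.2, 0.5, 0.1, 0.9]
high = [0.4, 0.3, 0.8, 0.6]
fused = fuse_features(low, high)
print(fused.shape)  # prints (8,)
```

Because the fused vector is the concatenation of both scaled inputs, a downstream segmentation head can weight low-level spatial detail and high-level semantics independently.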

Item Type: Article
Impact Factor: cited By 0
Depositing User: Mr Ahmad Suhairi Mohamed Lazim
Date Deposited: 12 Sep 2022 08:18
Last Modified: 12 Sep 2022 08:18
URI: http://scholars.utp.edu.my/id/eprint/33737
