A Dataset for Multimodal Fashion Recommender Model

dc.contributor.author: Orisadare Emmanuel Ayo
dc.date.accessioned: 2023-09-26T16:34:27Z
dc.date.available: 2023-09-26T16:34:27Z
dc.date.issued: 2023-07-28
dc.description: The result of this study is the DMFRM-202k dataset, consisting of 202,189 images with corresponding metadata and 4,697,573 ratings from 3,117,073 users. It also includes a fine-tuned ResNet50 model and several sub-datasets. The primary purpose of the dataset is to support the development of multimodal fashion recommendation models. To the authors' knowledge, it is the first large-scale dataset in the fashion recommendation community to provide accurately mapped textual and image data together with ratings, image class labels, feature vectors, and a predefined split into train, validation, and test sets. The dataset is rich enough to support the development of a variety of recommendation models.
dc.description.abstract: Fashion recommendation systems have gained significant attention in recent years, as they provide personalized and non-personalized suggestions to users based on their preferences and past behavior. The effectiveness of these systems depends largely on the availability of relevant, high-quality data, including text, images, and other modalities. Although several fashion recommendation datasets exist, they often suffer from limitations such as improper image-text mapping, small size, lack of diversity, and poor data quality. To address these limitations, this paper develops a Dataset for Multimodal Fashion Recommender Models (DMFRM-202k). The dataset contains an extensive collection of 202,189 fashion product images and their corresponding metadata, including product features and user ratings, preprocessed with several Python libraries. Class labels, feature vectors, and a ResNet50 model fine-tuned via transfer learning on selected fashion products are also provided. A multimodal recommender and an image classification model were developed using DMFRM-202k: the recommender achieved an average precision of 90% and recall of 90%, while the image classifier achieved an accuracy of 90%, precision of 91%, and recall of 89% at the 10th epoch. The dataset can enable researchers to develop more accurate and effective multimodal recommendation models in the fashion domain.
dc.identifier.citation: Emmanuel A. Orisadare, Idowu J. Diyaolu and Iyabo O. Awoyelu. Development of a Dataset for Multimodal Fashion Recommender Models. International Journal of Computer Applications 185(22):54-61. DOI: 10.5120/ijca2023922971.
dc.identifier.other: 10.5120/ijca2023922971
dc.identifier.uri: https://ir.oauife.edu.ng/handle/123456789/6319
dc.language.iso: en
dc.publisher: Department of Computer Science and Engineering - Obafemi Awolowo University
dc.title: A Dataset for Multimodal Fashion Recommender Model
dc.type: Dataset
Files
Original bundle
Name: rating_data.csv
Size: 137.71 MB
Format: Comma-Separated Values
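The `rating_data.csv` file holds the user ratings. Its actual column schema is not documented on this page, so the sketch below uses a hypothetical `user_id,item_id,rating` layout on a small in-memory sample purely to illustrate loading the file with pandas and aggregating ratings per item; inspect the real CSV before relying on these column names.

```python
import io
import pandas as pd

# Hypothetical sample mimicking rating_data.csv; real schema may differ.
sample = io.StringIO(
    "user_id,item_id,rating\n"
    "1,10017413,4\n"
    "1,10016283,5\n"
    "2,10017413,3\n"
)
ratings = pd.read_csv(sample)
# For the actual file: ratings = pd.read_csv("rating_data.csv")

# Average rating per item, a typical preprocessing step for a recommender.
mean_per_item = ratings.groupby("item_id")["rating"].mean()
print(mean_per_item.loc[10017413])  # 3.5
```

Given the file's 137.71 MB size, passing `dtype=` hints or `chunksize=` to `read_csv` can keep memory usage modest.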
License bundle
Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed upon submission