Visual preferences prediction for a photo gallery based on image captioning methods
Authors: Kharchevnikova Angelina Sergeevna, Savchenko Andrey Vladimirovich
Journal: Computer Optics (Компьютерная оптика)
Section: Image processing, pattern recognition
Issue: Vol. 44, No. 4, 2020.
The paper considers the problem of extracting user preferences from a personal photo gallery. We propose a novel approach based on image captioning, i.e., automatic generation of textual descriptions of photos, and their subsequent classification. Known image captioning methods based on convolutional and recurrent (Long Short-Term Memory, LSTM) neural networks are analyzed. We train several models that combine the visual features of a photograph with the outputs of an LSTM block, using Google's Conceptual Captions dataset. We examine the application of natural language processing algorithms to transform the obtained textual annotations into user preferences. Experimental studies are carried out using Microsoft COCO Captions, Flickr8k and a specially collected dataset reflecting the user’s interests. It is demonstrated that the best quality of preference prediction is achieved using keyword search and text summarization methods from the Watson API, which are 8% more accurate than traditional latent Dirichlet allocation. Moreover, descriptions generated by the trained neural models are classified 1-7% more accurately than those produced by known image captioning models.
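The second stage of the pipeline summarized above, mapping generated captions to preference categories, can be illustrated with a simple keyword search, one of the techniques the abstract reports as most accurate. The sketch below is a hypothetical illustration: the category names, keyword lists, and helper functions (CATEGORY_KEYWORDS, classify_caption, predict_preferences) are assumptions for demonstration, not the authors' implementation, and in the full pipeline the captions would be produced by the trained CNN+LSTM captioning model.

```python
# Minimal sketch of caption-based preference extraction via keyword search.
# The categories and keyword lists are illustrative assumptions, not the
# ones used in the paper.
from collections import Counter

CATEGORY_KEYWORDS = {
    "sports": {"ball", "game", "player", "bike", "surfboard", "skis"},
    "travel": {"beach", "mountain", "city", "plane", "train", "boat"},
    "food": {"pizza", "cake", "plate", "restaurant", "coffee"},
    "animals": {"dog", "cat", "horse", "bird", "zoo"},
}

def classify_caption(caption):
    """Return every preference category whose keywords occur in the caption."""
    tokens = set(caption.lower().split())
    return [cat for cat, kws in CATEGORY_KEYWORDS.items() if tokens & kws]

def predict_preferences(captions, top_k=3):
    """Aggregate per-photo categories into the user's top-k preferences."""
    votes = Counter(cat for c in captions for cat in classify_caption(c))
    return [cat for cat, _ in votes.most_common(top_k)]

if __name__ == "__main__":
    # In the full pipeline these captions would be generated by a CNN+LSTM
    # model; they are hard-coded here to keep the example self-contained.
    gallery_captions = [
        "a dog playing with a ball on the beach",
        "a man riding a surfboard on a wave",
        "a plate of pizza on a wooden table",
    ]
    print(predict_preferences(gallery_captions))
    # -> ['sports', 'travel', 'animals']
```

Aggregating per-photo category hits across the whole gallery, rather than labeling each photo in isolation, is what turns caption classification into a user preference profile.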
Keywords: User modeling, image processing, image captioning, convolutional neural networks
Short address: https://sciup.org/140250030
IDR: 140250030 | DOI: 10.18287/2412-6179-CO-678