Fine-tuning large language models as historical text extractors: enhancing sequential recommendation with latent signals

Free access

Sequential recommendation systems aim to predict the next item a user will interact with based on their historical behavior. Traditional methods often rely on structured IDs or embeddings, which may overlook rich contextual information present in textual metadata. In this work, we propose fine-tuning large language models (LLMs) as historical text extractors that generate latent signals from user interaction sequences. These signals enhance conventional sequence modeling approaches, improving recommendation accuracy and robustness.
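The pipeline sketched in the abstract can be illustrated as follows. This is a minimal, hypothetical sketch, not the authors' implementation: the fine-tuned LLM encoder is replaced by a deterministic hash-based pseudo-embedding (`text_embed`), and the fusion with sequence modeling is reduced to mean-pooling the per-item text signals and scoring candidates by dot product. All function names and the dimensionality are assumptions for illustration.

```python
# Sketch of enhancing sequential recommendation with text-derived latent
# signals. "text_embed" stands in for a fine-tuned LLM encoder; a real
# system would produce this vector from the user's interaction-history text.
import hashlib
import math

DIM = 8  # toy embedding size (assumption)

def text_embed(text: str) -> list[float]:
    """Placeholder for the LLM encoder: deterministic unit-norm vector."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    vec = [b / 255.0 - 0.5 for b in digest[:DIM]]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def encode_history(titles: list[str]) -> list[float]:
    """Latent signal for the whole history: mean of per-item embeddings."""
    embs = [text_embed(t) for t in titles]
    return [sum(col) / len(embs) for col in zip(*embs)]

def score(history_vec: list[float], candidate_title: str) -> float:
    """Dot-product relevance between the history signal and a candidate."""
    cand = text_embed(candidate_title)
    return sum(h * c for h, c in zip(history_vec, cand))

history = ["wireless mouse", "mechanical keyboard", "usb-c hub"]
hvec = encode_history(history)
candidates = ["laptop stand", "gaming monitor", "garden hose"]
ranked = sorted(candidates, key=lambda c: score(hvec, c), reverse=True)
print(ranked)
```

In a full system, the history vector would be concatenated with (or added to) the ID-based sequence representation before the final ranking layer, letting the model exploit both behavioral and textual evidence.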

Recommendation system, large language model, data mining, feature engineering

Short URL: https://sciup.org/142245006

IDR: 142245006

Scientific article