CAESAR: context-aware explanation based on supervised attention for service recommendations



Lei Li¹ · Li Chen¹ · Ruihai Dong²

Received: 13 April 2020 / Revised: 1 November 2020 / Accepted: 3 November 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020

Abstract

Explainable recommendations have drawn increasing attention from both academia and industry recently, because they can help users better understand recommendations (i.e., why particular items are recommended), thereby improving the persuasiveness of the recommender system and users’ satisfaction. However, little work has been done to provide explanations from the angle of a user’s contextual situation (e.g., companion, season, and destination if the recommendation is a hotel). To fill this research gap, we propose a new context-aware recommendation algorithm based on a supervised attention mechanism (CAESAR), which matches latent features to explicit contextual features mined from user-generated reviews to produce context-aware explanations. Experimental results on two large datasets in the hotel and restaurant service domains demonstrate that our model improves recommendation performance over state-of-the-art methods and, furthermore, is able to return feature-level explanations that adapt to the target user’s current context.

Keywords Explainable recommendation · Context-aware recommender systems · Neural network · Multi-task learning
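The core idea of supervising an attention mechanism with explicit contextual signals can be sketched as follows. This is a minimal illustration under assumed names, not the paper's actual model: attention scores over item features are trained with a cross-entropy loss so that the resulting weights match a target distribution derived from the contextual features a review mentions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical contextual features mined from a review; the target
# distribution (assumed here) encodes which features the review
# actually emphasizes, and serves as the supervision signal.
features = ["location", "season", "companion", "price"]
target = np.array([0.6, 0.3, 0.1, 0.0])

scores = np.zeros(len(features))  # learnable attention scores
lr = 0.5
for _ in range(500):
    weights = softmax(scores)
    # gradient of the cross-entropy -sum(target * log(weights))
    # with respect to the raw scores is (weights - target)
    scores -= lr * (weights - target)

weights = softmax(scores)
# the learned weights now concentrate on the supervised features,
# so they can be read off as a feature-level explanation
```

Because the supervision targets are observable (mined from reviews) rather than latent, the learned weights remain interpretable, which is what makes feature-level explanations possible.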

Lei Li
[email protected]
Li Chen
[email protected]
Ruihai Dong
[email protected]

1 Department of Computer Science, Hong Kong Baptist University, Hong Kong, China
2 Insight Centre for Data Analytics, University College Dublin, Dublin, Ireland

Journal of Intelligent Information Systems

1 Introduction

Recommendation algorithms, such as collaborative filtering (Sarwar et al. 2001) and matrix factorization (Mnih and Salakhutdinov 2008), have been widely used in academia and

industry to return personalized information or services to users. On the one hand, in order to provide more accurate recommendations that adapt to users’ needs in different contextual scenarios, context-aware recommender systems (CARS) have been extensively studied (Adomavicius and Tuzhilin 2015; Mei et al. 2018; Li et al. 2019; He and Chua 2017; Xiao et al. 2017; Chen and Chen 2015; Levi et al. 2012). On the other hand, explainable recommendation, which aims to answer why a particular item is recommended, has drawn increasing attention in recent years (Zhang et al. 2014; He et al. 2015; Catherine and Cohen 2017; Lu et al. 2018b; Baral et al. 2018; Chen et al. 2018, 2019; Wang et al. 2018a, b, c; Gao et al. 2019; Wang et al. 2019; Li et al. 2020a, b; Zhang and Chen 2020). As a matter of fact, appropriate explanations can help users make better and faster decisions, increase their trust in the system, and/or convince them to try or buy the recommended item (Tintarev and Masthoff 2015). However, few recommendation models have linked the two branches of work for providing context-aware exp