Towards Explainable Recommender Systems for Illiterate Users


Tchappi I., Hulstijn J., Sinyabe Pagou E., Bhattacharya S., Najjar A.


ACM International Conference Proceeding Series, pp. 415-416, 2023


Explainable AI (XAI) has emerged in recent years as a set of techniques for building systems whose outcomes humans can understand when produced by artificially intelligent entities. Although these initiatives have advanced over the past few years, most approaches focus on explanations aimed at literate or even skilled end users, such as engineers and researchers. Few works in the literature address the needs of illiterate end users in XAI (illiterate-centered design). This paper proposes a generic model that extracts the content of explanations from a given explainable AI system and translates it into a representation format that illiterate end users can understand. The usefulness of the model is demonstrated through an application to a food recommender system.