Understanding AI outputs: study shows pro-western cultural bias in the way AI decisions are explained
==========
A recent systematic review analyzed over 200 studies from the past decade in which explanations generated by explainable AI (XAI) systems were evaluated with human participants. The findings suggest that many existing systems may produce explanations tailored primarily to individualist, typically western, populations.

Because cultures differ in how they prefer behavior to be explained, these preferences matter for designing inclusive XAI systems. Yet the review suggests that XAI developers are largely insensitive to such differences: a striking 93.7% of the studies reviewed showed no awareness of cultural variations potentially relevant to designing explainable AI, and 48.1% did not report on participants' cultural background at all. Of those that did, 81.3% sampled only western, educated, industrialized, rich, and democratic (WEIRD) populations.

Overlooking culture in XAI research can lead to systems that fail to meet the explanatory needs of non-western populations, diminishing trust in AI. To address this cultural bias, developers and psychologists should collaborate to test for relevant cultural differences and to report the cultural backgrounds of study samples in XAI user study findings.
#AI #CulturalBias #ExplainableAI #SystematicReview
https://theconversation.com/understanding-ai-outputs-study-shows-pro-western-cultural-bias-in-the-way-ai-decisions-are-explained-227262