Trust in artificial intelligence: From a Foundational Trust Framework to emerging research opportunities
Article
With the rise of artificial intelligence (AI), trust in AI has emerged as a paramount societal concern. Despite increased attention from researchers, the topic remains fragmented, lacking a common conceptual and theoretical foundation. To facilitate systematic research on this topic, we develop a Foundational Trust Framework that provides a conceptual, theoretical, and methodological foundation for trust research in general. The framework positions trust in general, and trust in AI specifically, as a problem of interaction among systems, and applies systems thinking and general systems theory to trust and trust in AI. The Foundational Trust Framework is then used to gain a deeper understanding of the nature of trust in AI. From this analysis, a research agenda emerges that proposes significant questions to facilitate further advances in empirical, theoretical, and design research on trust in AI.
Lukyanenko, R., Maass, W., & Storey, V. C. (2022). "Trust in artificial intelligence: From a Foundational Trust Framework to emerging research opportunities." Electronic Markets, pp. 1-60.
University of Virginia
November 19, 2022