Trust in artificial intelligence: From a Foundational Trust Framework to emerging research opportunities

Article
Author: Lukyanenko, Roman, McIntire School of Commerce, University of Virginia. ORCID: orcid.org/0000-0001-8125-5918
Abstract:

With the rise of artificial intelligence (AI), the issue of trust in AI emerges as a paramount societal concern. Despite increased attention from researchers, the topic remains fragmented, lacking a common conceptual and theoretical foundation. To facilitate systematic research on this topic, we develop a Foundational Trust Framework that provides a conceptual, theoretical, and methodological foundation for trust research in general. The framework positions trust in general, and trust in AI specifically, as a problem of interaction among systems, and applies systems thinking and general systems theory to trust and trust in AI. The Foundational Trust Framework is then used to gain a deeper understanding of the nature of trust in AI. From this analysis, a research agenda emerges that proposes significant questions to facilitate further advances in empirical, theoretical, and design research on trust in AI.

Source Citation:

Lukyanenko, R., Maass, W., & Storey, V. C. (2022). "Trust in artificial intelligence: From a Foundational Trust Framework to emerging research opportunities." Electronic Markets, pp. 1-60.

Publisher:
University of Virginia
Published Date:
November 19, 2022