Explainable AI: Opening the black box or Pandora’s Box?

Article
Author: Lukyanenko, Roman, McIntire School of Commerce, University of Virginia. ORCID: orcid.org/0000-0001-8125-5918
Abstract:

Advances in AI, especially those based on machine learning, have provided a powerful way to extract useful patterns from large, heterogeneous data sources. The rise of massive amounts of data, coupled with powerful computing capabilities, makes it possible to tackle previously intractable real-world problems. Medicine, business, government, and science are rapidly automating decisions and processes using machine learning. Unlike traditional AI approaches based on explicit rules expressing domain knowledge, machine learning often lacks an explicit, human-understandable specification of the rules producing model outputs. With growing reliance on automated decisions, an overriding concern is understanding the process by which “black box” AI techniques make decisions. This is known as the problem of explainable AI. However, opening the black box may lead to unexpected consequences, much as opening Pandora’s Box did.

Publisher:
University of Virginia
Published Date:
August 12, 2022