Ethical Decision Making in Automated Vehicles During Unavoidable Crashes

Poster
Author: Goodall, Noah (Virginia Transportation Research Council), ORCID: orcid.org/0000-0002-3576-9886
Abstract:

Automated vehicles have received much attention recently, particularly the Defense Advanced Research Projects Agency Urban Challenge vehicles, Google’s self-driving cars, and various others from auto manufacturers. These vehicles have the potential to reduce crashes and improve roadway efficiency significantly by automating the responsibilities of the driver. Still, automated vehicles are expected to crash occasionally, even when all sensors, vehicle control components, and algorithms function perfectly. If a human driver is unable to take control in time, a computer will be responsible for precrash behavior. Unlike other automated vehicles, such as aircraft, in which every collision is catastrophic, and unlike guided track systems, which can avoid collisions only in one dimension, automated roadway vehicles can predict various crash trajectory alternatives and select a path with the lowest damage or likelihood of collision. In some situations, the preferred path may be ambiguous. The study reported here investigated automated vehicle crashing and concluded the following: (a) automated vehicles would almost certainly crash, (b) an automated vehicle’s decisions that preceded certain crashes had a moral component, and (c) there was no obvious way to encode complex human morals effectively in software. The paper presents a three-phase approach to developing ethical crashing algorithms, consisting of a rational approach, an artificial intelligence approach, and a natural language requirement. The phases are theoretical and should be implemented as the technology becomes available.
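To make concrete the idea of predicting crash trajectory alternatives and selecting the path with the lowest damage or likelihood of collision, the following minimal Python sketch ranks hypothetical candidate maneuvers by expected harm (collision probability times estimated severity). The maneuver names, probabilities, and harm values are illustrative assumptions, not from the poster; the abstract’s point is precisely that collapsing such choices into a single cost function carries a moral component that is not obviously encodable in software.

```python
# Illustrative sketch only -- not the poster's method. Assumes upstream
# prediction modules have already estimated, for each candidate maneuver,
# a collision probability and an expected severity; all values are hypothetical.
from dataclasses import dataclass


@dataclass
class Trajectory:
    name: str
    collision_probability: float  # estimated likelihood of any collision, 0-1
    estimated_harm: float         # expected severity if a collision occurs, arbitrary units


def expected_harm(t: Trajectory) -> float:
    """Score a candidate by expected harm = probability * severity."""
    return t.collision_probability * t.estimated_harm


def select_path(candidates: list[Trajectory]) -> Trajectory:
    """Choose the candidate maneuver with the lowest expected harm."""
    return min(candidates, key=expected_harm)


if __name__ == "__main__":
    options = [
        Trajectory("brake in lane", collision_probability=0.9, estimated_harm=2.0),
        Trajectory("swerve left", collision_probability=0.4, estimated_harm=5.0),
        Trajectory("swerve right", collision_probability=0.2, estimated_harm=4.0),
    ]
    best = select_path(options)
    print(f"Selected: {best.name} (expected harm {expected_harm(best):.2f})")
```

A purely rational formulation like this is what the poster's first phase gestures toward; the later phases address cases where a single scalar cost cannot capture the ethical trade-offs involved.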

Language:
English
Source Citation:
Transportation Research Board 2nd Annual Workshop on Road Vehicle Automation
Publisher:
University of Virginia
Published Date:
July 16, 2013
Sponsoring Agency:
Virginia Department of Transportation