May 19, 2024

Researchers Develop New Experiment To Enhance Moral Decision-Making In Autonomous Vehicles

Researchers have devised a novel experiment to garner insights into human moral decision-making related to driving, with the aim of training autonomous vehicles (AVs) to make ethical choices. They seek to gather data to help AVs navigate realistic moral challenges on the road, moving beyond the widely discussed thought experiment known as the “trolley problem”. The study, titled “Moral judgment in realistic traffic scenarios: Moving beyond the trolley paradigm for ethics of autonomous vehicles”, has been published in the open-access journal AI & Society.

According to Dario Cecchini, a postdoctoral researcher at North Carolina State University and the first author of the paper, the trolley problem presents individuals with the dilemma of whether to intentionally cause the death of one person to prevent multiple fatalities.

The trolley problem has become a popular framework for examining ethical decision-making in relation to traffic, Cecchini explains. In its common form, it involves a self-driving car facing a binary choice between hitting a pedestrian crossing the street or swerving into a lethal obstacle.

Trolley-style scenarios, however, are unrealistic: the moral decisions drivers face in everyday life are far more varied and mundane. Should someone exceed the speed limit? Should they run a red light? Should they yield to an ambulance?

These seemingly mundane choices matter because they can escalate into life-or-death situations, asserts Veljko Dubljević, an associate professor at NC State and the corresponding author of the paper.

For instance, if a driver is speeding and runs a red light, they may find themselves having to either swerve into traffic or collide with another vehicle. However, there is little research on how people make moral judgments about the everyday decisions drivers actually face.

To address this gap, the researchers devised a series of experiments to collect data on how individuals make moral judgments in low-stakes traffic situations. They constructed seven distinct driving scenarios, such as a parent deciding whether to violate a traffic signal to get their child to school on time. Each scenario was implemented in a virtual reality environment, allowing participants to experience drivers' actions audiovisually rather than simply reading about them.

To conduct this research, the team built upon the Agent Deed Consequence (ADC) model, which posits that moral judgments are based on three factors: the agent (the character or intent of the person acting), the deed (what they are doing), and the consequence (the outcome resulting from their actions).
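The ADC model lends itself to a simple structured representation. As a minimal sketch (an illustration, not the authors' actual implementation), each scenario variant could be encoded as a record of the three factors, each reduced to a positive or negative value; the `ScenarioVariant` name and the binary encoding are assumptions made here for clarity:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScenarioVariant:
    """One version of a traffic scenario under the ADC model.

    Each factor is reduced to a binary positive/negative value,
    e.g. a considerate vs. abusive parent (agent), stopping at a
    yellow light vs. running a red light (deed), and arriving
    safely vs. causing an accident (consequence).
    """
    agent_positive: bool        # character/intent of the driver
    deed_positive: bool         # what the driver does
    consequence_positive: bool  # outcome of the action
```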

In each traffic scenario, the researchers generated eight versions by independently varying the agent, deed, and consequence between a positive and a negative variant. For instance, in one iteration of the parent scenario, the parent is considerate, stops at a yellow light, and successfully gets the child to school on time.

In another version, the parent is abusive, runs a red light, and causes an accident. The remaining six versions alter the character of the parent (the agent), their decision at the traffic signal (the deed), and/or the outcome of their action (the consequence).
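With each of the three factors taking one of two values, the eight versions are simply the Cartesian product 2 × 2 × 2. A brief sketch of how such variants might be enumerated, reusing the hypothetical `ScenarioVariant` record above:

```python
from itertools import product

# Enumerate all 2 x 2 x 2 = 8 combinations of agent, deed, and consequence.
variants = [
    ScenarioVariant(agent_positive=a, deed_positive=d, consequence_positive=c)
    for a, d, c in product([True, False], repeat=3)
]

for v in variants:
    print(v)  # 8 distinct versions of the same base scenario
```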

Cecchini explains that the objective is for participants to rate the morality of the driver's behavior in each scenario on a scale from 1 to 10. This would yield robust data on perceived morality in the context of driving, which can subsequently be used to develop AI algorithms for moral decision-making in AVs.
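The paper does not specify what form these algorithms take. As one hedged sketch, average participant ratings could be regressed on the three ADC factors to estimate how much each factor contributes to perceived morality; the ratings below are invented purely for illustration:

```python
import numpy as np

# Hypothetical data: one row of ADC factors (1 = positive, 0 = negative)
# per scenario version, paired with an average participant rating (1-10).
X = np.array([
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 1],
    [1, 0, 0],
    [0, 1, 1],
    [0, 1, 0],
    [0, 0, 1],
    [0, 0, 0],
], dtype=float)
ratings = np.array([9.1, 6.8, 5.5, 3.2, 7.4, 5.0, 4.1, 1.9])  # illustrative only

# Ordinary least squares with an intercept term.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, ratings, rcond=None)

for name, w in zip(["agent", "deed", "consequence", "intercept"], coef):
    print(f"{name}: {w:+.2f}")
```

A fitted model of this kind would indicate, for example, whether a driver's intent or the outcome of their action weighs more heavily in participants' judgments.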

The researchers conducted pilot testing to refine the scenarios and ensure they reflect believable and easily understandable situations.

The next step involves collecting data on a large scale, with thousands of participants taking part in the experiments, according to Dubljević. These data will be used to create more interactive experiments and further refine the understanding of moral decision-making. Algorithms can then be developed for use in AVs, followed by additional testing to evaluate their performance.
