Scientists may be close to adding a decision-making feature to self-driving cars, giving them the ability to make the moral and ethical decisions that human drivers do.

Researchers from the Institute of Cognitive Science at the University of Osnabrück have used immersive virtual reality to study human behavior in simulated road scenarios, paving the way to teach autonomous vehicles to mimic human ethical decisions on the road.

Participants in the study drove a car through a typical suburban neighborhood on a foggy day, where they encountered unexpected, unavoidable dilemma situations involving inanimate objects, animals, and humans, and had to decide which were to be spared.

The results were modeled statistically, yielding rules with an associated degree of explanatory power for the observed behavior. The analysis showed that moral decisions in unavoidable traffic collisions can be explained and modeled by a single value of life assigned to every human, animal, or inanimate object.

“Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object,” Leon Sütfeld, first author of the study, said in a statement.
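The value-of-life model described above can be sketched as a simple decision rule: each entity type is assigned a scalar value, and in an unavoidable dilemma the trajectory that spares the higher total value is preferred. The values and names below are illustrative assumptions for exposition, not the study's fitted parameters.

```python
# Hypothetical sketch of a value-of-life-based decision rule.
# The specific values are assumptions, not the study's estimates.
VALUE_OF_LIFE = {
    "human": 10.0,
    "animal": 3.0,
    "object": 1.0,
}

def choose_trajectory(spared_if_a, spared_if_b):
    """Return 'A' or 'B', whichever trajectory spares the higher total value.

    Each argument is a list of entity types that would be spared if that
    trajectory were taken.
    """
    total_a = sum(VALUE_OF_LIFE[e] for e in spared_if_a)
    total_b = sum(VALUE_OF_LIFE[e] for e in spared_if_b)
    return "A" if total_a >= total_b else "B"

# Swerving (A) spares a human; staying on course (B) spares an animal
# and an inanimate object.
print(choose_trajectory(["human"], ["animal", "object"]))  # → A
```

A single-value model like this is deliberately crude; its point, per the study, is that such a simple rule already explains much of the observed human behavior.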

According to the study, previous research has shown that moral judgment and behavior are highly context-dependent, and that comprehensive, nuanced models of the underlying cognitive processes are out of reach.

The study’s results could help advance the debate regarding the behavior of autonomous vehicles. The German Federal Ministry of Transport and Digital Infrastructure has defined 20 ethical principles related to autonomous vehicles.

For example, under the new German principles, a child running into the road would be classified as significantly involved in creating the risk and therefore less qualified to be saved than an adult standing on the footpath as a non-involved party.

“Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma,” Prof. Peter König, a senior author of the paper, said in a statement. “Firstly, we have to decide whether moral values should be included in guidelines for machine behavior and secondly, if they are, should machines act just like humans.”

The study was published in Frontiers in Behavioral Neuroscience.