The Massachusetts Institute of Technology (MIT) is surveying the public on which decisions autonomous cars should make in fatal situations.
MIT’s ‘Moral Machine’ poses numerous scenarios to the public in which an autonomous vehicle must decide whom to kill. Respondents are given two choices, and in each, lives must be lost – there is no non-fatal option. To make each scenario and its victims clear, a written explanation accompanies the graphical depiction.
Individuals’ answers are then compared with aggregate answer patterns to gauge where they fall on a series of scales, depending on the circumstances within each scenario.
For example, the results compare whether the individual favours young people over the elderly, those upholding the law over those flouting it (for example, a pedestrian who steps into the road when the crossing light indicates not to cross), or passengers in the autonomous vehicle over other road users.
Patterns have already appeared in users’ answers, including strong preferences for saving the lives of younger people and of people with ‘higher social value’. In the examples given, a doctor represents someone with high social value and a bank robber someone with low social value.
Another strong preference, unsurprisingly, was for saving human lives over the lives of pets. Users were split almost 50/50 between saving passengers’ lives and those of other potential victims, and between protecting physically fit people and overweight people.
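The survey's core mechanic, aggregating many forced binary choices into per-dimension preference scores, can be sketched as follows. The dimension names, data format, and scoring scheme here are illustrative assumptions for this article, not MIT's actual methodology:

```python
# Hypothetical sketch of scoring forced-choice survey responses.
# Dimension names and the scoring scheme are invented for illustration;
# they do not reflect the Moral Machine's real implementation.

from collections import defaultdict

def preference_scores(responses):
    """Each response is a (dimension, chose_first) pair, where chose_first
    is True if the respondent picked the first option on that scale
    (e.g. 'spare the younger person' on a young-vs-old scenario).

    Returns the fraction of choices favouring the first option,
    per dimension: 1.0 = always first option, 0.0 = never.
    """
    counts = defaultdict(lambda: [0, 0])  # dimension -> [first, total]
    for dimension, chose_first in responses:
        counts[dimension][0] += int(chose_first)
        counts[dimension][1] += 1
    return {d: first / total for d, (first, total) in counts.items()}

# Toy sample: two of three respondents spared the younger person;
# respondents split evenly on passengers vs pedestrians.
sample = [
    ("young_vs_old", True), ("young_vs_old", True), ("young_vs_old", False),
    ("passengers_vs_pedestrians", True), ("passengers_vs_pedestrians", False),
]
print(preference_scores(sample))
```

A score near 0.5 on a dimension corresponds to the near-50/50 splits reported above, while scores near 1.0 indicate a strong preference such as the one for saving younger people.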
Sahar Danesh, IET principal policy advisor for transport, said: "The machine will always make the decision it has been programmed to make. It won't have started developing ideas without a database to tap into; with this database decisions can then be made by the machine. With so many people's lives at stake, what should the priority be? It's good to test it in this virtual context, then bring in other industries.
"The technology hasn't got as far as decision-making software yet, and the regulation surrounding it is not yet in place, which is why these virtual platforms are so important. There has to be a platform and a consultation process before the programming is completed; bring in the insurance industry, legal experts, health professionals and ethics professors to clarify the debate. The more people we can bring together to help make these decisions, the better. Then the algorithms can be made.
"Machine errors are always judged more harshly than human errors, so this is a good opportunity to develop the moral criteria that would go into developing autonomous cars. It's good to gather intelligence to teach a machine ethics; human beings make decisions based on instinct, but a machine doesn't know how to do that. We need to gather this data to design programs to help it make decisions that a human would make – or ideally should."
The effectiveness of autonomous technology was called into question earlier this year, after a fatal collision occurred while Tesla's Autopilot software was activated. The UK government has also held a public consultation on autonomous cars and their future on Britain's roads.
The UK is to host the first autonomous vehicle track day, as autonomous vehicles become more prevalent on road and track.