After the first death caused by a Tesla operating on Autopilot, the philosophical questions about the responsibility of autonomous machines have become even more pressing. Who is responsible when one of these robots, operating without human control, causes damage or even kills someone?
Traditionally, we blame the designers when machines malfunction and cause harm:
"After all," says Vocativ, "researchers have thoroughly demonstrated that artificial intelligence systems often inherit the biases of their designers, since the statistical models and types of data used to 'train' their algorithms dramatically affect how the system makes its choices."
However, a Yale researcher defends the provocative thesis that robots themselves bear responsibility. She asserts that they should be held morally responsible for their actions, and even criminally responsible, and therefore punished if necessary for any harm inflicted on others.
To demonstrate this, Ying Hu compares robots to corporations, which in American law are considered "persons" whose actions are often detached from the decisions of the individuals who make them up. Autonomous robots are constantly learning and adjusting to situations: this creates an internal decision-making structure that evolves over time and can become so complex that it is impossible to attribute their actions to a design error or a human decision.
From then on, robots could effectively be regarded as making moral judgments, since their errors would be their own doing. Humans would then act as arbiters, judging whether a given action is good or bad:
"If and when we delegate to robots the power to make moral decisions, I argue that humans have a duty to supervise them. There should be a process to assess the robots' reasoning and, if that reasoning is wrong, to publicly declare it as such."
This would make it possible to identify prohibited and criminal conduct, and to create a reporting system allowing other robots and robot designers to avoid repeating it. Possible punishments for the robot would include deactivation, reprogramming, or being labeled a "criminal".
Ying Hu believes this problem will arise with increasing urgency: in the United States, autonomous cars and robotic security guards already exist. The European Union has also put forward the idea of classifying these autonomous robots as "electronic persons" with "specific rights and responsibilities".
But above all, punishing these robots must not serve to exonerate their owners.