Among the many controversial aspects (and frankly there are many) of the predictable and pervasive adoption of self-driving cars in the near future, one often remains neglected and relegated to the background: the question of moral responsibility.
The issue has gained relevance not only because of the news that an Uber self-driving car killed a pedestrian in an accident in March 2018, but also, and above all, in relation to the supposed "intelligence" attributed to these devices.
In fact, the intelligence attributed to self-driving cars resembles human cognitive capacities in name only, rather than in any actual ability to anticipate possible future outcomes.
The use of technologies that delegate to machines tasks and intentions once the exclusive prerogative of human beings inevitably raises the question (and the related fear) of a progressive, generalized irresponsibility induced by technological society.
Ethical issues that cannot be set aside
The introduction of self-driving cars leaves in the background various issues that involve ethical choices, choices which are difficult to encode in automated systems.
Faced with the choice between avoiding an accident and running over a pedestrian, what "decision" will a self-driving car make?
Should responsibility for a vehicle's automated choices be traced back to the manufacturer of the "intelligent" car, or to its actual owner, as is currently the case in ordinary road accidents (regulated by compulsory insurance policies)?
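To see why such choices resist automation, consider a deliberately naive sketch (all names, probabilities, and weights below are hypothetical, invented for illustration and drawn from no real system): any automated "ethical" decision ultimately reduces to comparing numeric costs that someone had to hard-code, so the ethical judgment is smuggled into the constants themselves.

```python
# Deliberately naive sketch: an automated "ethical" choice reduced to
# hard-coded numbers. All names and values are hypothetical.

# Who decided these weights, and on what grounds? Even setting them
# equal is itself an ethical stance, not a neutral default.
COST_OCCUPANT_INJURY = 1.0
COST_PEDESTRIAN_INJURY = 1.0

def choose_maneuver(options):
    """Pick the maneuver with the lowest expected 'harm' score.

    Each option is a tuple:
    (name, P(occupant injured), P(pedestrian injured)).
    """
    def expected_harm(option):
        _, p_occupant, p_pedestrian = option
        return (p_occupant * COST_OCCUPANT_INJURY
                + p_pedestrian * COST_PEDESTRIAN_INJURY)

    return min(options, key=expected_harm)[0]

# Swerving risks the occupant; braking risks the pedestrian.
options = [
    ("swerve", 0.3, 0.0),
    ("brake",  0.0, 0.4),
]
print(choose_maneuver(options))  # -> swerve
```

The point of the sketch is that the output flips entirely with the choice of weights: the "decision" is made not by the car but by whoever wrote the constants, which is precisely where responsibility refuses to disappear.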
Without resorting to science-fiction scenarios (in the style of Isaac Asimov's Foundation trilogy), it should be emphasized that the capacity to choose does not only involve "rational" issues that can be emulated by algorithms, or problems of correct estimation and forecasting (for example, which direction of travel to take, or the correct steering angle and acceleration of the vehicle), tasks in which the computational capabilities of machines have been shown to greatly outperform those of human beings.
Rather, it is a matter of understanding that many of the choices and decisions we are commonly called upon to make every day (often without realizing it) inevitably involve ethical evaluations.
The fear is that, in this way, we delegate to machines not only the tasks (whether "repetitive" or "predictive") for which they were designed, but also those choices whose responsibility, like it or not, remains irremediably ours as conscious and aware human beings.