

Among the controversial aspects (and frankly speaking, there are many) surrounding the foreseeable, pervasive adoption of self-driving cars, there is one that often remains neglected and relegated to the background:

Who would be responsible in the event of “unexpected” accidents caused by such automated devices?

The issue has gained relevance not only because of the news that an Uber self-driving car killed a pedestrian in an accident that took place in March 2018, but also, and above all, in relation to the supposed “intelligence” attributed to these devices.

In fact, the intelligence attributed to self-driving cars can be compared with human cognitive capacities only by loose analogy, not in terms of an actual ability to anticipate possible future outcomes.

The use of technologies that delegate to machines tasks and intentions that were once the exclusive prerogative of human beings inevitably raises the question (and the attendant fear) of a progressive, generalized irresponsibility induced by technological society.

Ethical issues that cannot be avoided

The introduction of self-driving cars pushes into the background several issues that involve ethical choices, choices that are difficult to encode in automated systems.

Faced with the choice between avoiding an accident and running over a pedestrian, what “decision” will a self-driving car make?

Who would be responsible for those automated choices?

Is autonomous driving really safer?

One of the most widespread beliefs about Artificial Intelligence is that algorithms are less prone to errors of judgment than humans.

On this view, even activities commonly considered the prerogative of human operators should be delegated to machines, in order to reduce human error.

Among the examples usually adduced by tech pundits to support this thesis is, of course, that of self-driving cars.

Even in the face of cases such as the March 2018 accident in which an Uber (semi-)autonomous car struck and killed a pedestrian, advocates of the superiority of algorithms are inclined to place the blame on human error rather than on the algorithms.

But is this really the case?

Why autonomous driving algorithms cannot be regarded as less risky

As digital technology experts often like to repeat, we live in a data-driven age.

The increasing availability of large amounts of data (and the ability to process them) has allowed the development of services that were previously unimaginable.

Algorithms have evolved to the point of effectively exploiting the growing amount of available data, making it possible to develop advanced services ranging from intelligent content search, to the identification of points of interest through geolocalization, to the automatic translation of texts between languages.

While the accuracy of the results obtained through such smart services improves day by day, they still do not always provide optimal results.

Different levels of precision have different consequences in different contexts

When algorithms give the right answer for the wrong reasons

One of the most controversial aspects of algorithms is the “unreasonable reliability” attributed to them, a reliability grounded in the undeniable predictive effectiveness of the data.

In other words, the trust placed in the predictive capabilities of algorithms is a direct consequence of the predictive efficacy that can be derived from the data itself.

The predictive effectiveness of data has inspired the recent “data-driven” decision-making paradigm.

However, this predictive efficacy does not always entitle us to reasonably trust the results obtained from automated “data-driven” procedures.
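To make the point concrete, here is a minimal sketch (using synthetic, hypothetical data) of how a model can look predictively effective while being right for the wrong reasons: a classifier that scores well on held-out data because it latches onto a spurious artifact, and collapses as soon as that artifact is absent.

```python
# Minimal sketch with synthetic data: a model that is "predictively
# effective" only because of a spurious artifact in the data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)                    # ground-truth labels
signal = y + rng.normal(0, 2.0, n)           # weak but genuinely informative feature
artifact = y + rng.normal(0, 0.1, n)         # strong but spurious artifact
X = np.column_stack([signal, artifact])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))  # high, driven by the artifact

# If the artifact is absent at deployment time (column zeroed out),
# the same "reliable" model degrades sharply:
X_shift = X_te.copy()
X_shift[:, 1] = 0.0
print("accuracy without artifact:", model.score(X_shift, y_te))
```

Nothing in the first accuracy number reveals the problem: the predictive efficacy deduced from the data says nothing about why the model is right.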

Predictive effectiveness is not always synonymous with reliability

To see this, just consider the results obtained with one of the most popular tools that exploit the predictive power of machine learning algorithms combined with large amounts of data: automatic translators.

It is undeniable that these tools have improved dramatically in recent years: automatic translations to and from different languages, obtained with artificial intelligence tools such as Google Translate, are by now “mostly” reliable.

However, it is in that “mostly” that the devil hides: precisely because they are not one hundred percent reliable, such translations still need to be interpreted by “sentient” human beings, that is, readers able to grasp their “meaning”.
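As an illustration, here is a minimal sketch of machine translation with an off-the-shelf neural model; the library (Hugging Face transformers) and the model name are assumptions chosen for the example, not the specific tools discussed above, but the article’s point applies to any such system.

```python
# A minimal sketch: machine translation with an off-the-shelf model.
# The library and model name are illustrative assumptions.
from transformers import pipeline

# Italian-to-English translation pipeline.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-it-en")

result = translator("La macchina non capisce il significato del testo.")
print(result[0]["translation_text"])

# The output is typically fluent and "mostly" correct, but nothing here
# checks meaning: a human reader is still needed to catch the cases in
# which a plausible-sounding translation is simply wrong.
```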