

Among the controversial aspects (and frankly, there are many) of the foreseeable and pervasive adoption of self-driving cars in the near future, there is one that often remains neglected, relegated to the background:

Who would be responsible in the event of “unexpected” accidents caused by such automated devices?

The issue has gained relevance not only because of the news that an Uber self-driving car killed a pedestrian in an accident that took place in March 2018, but also, and above all, in relation to the supposed “intelligence” that would characterize these devices.

In fact, the intelligence attributed to self-driving cars resembles human cognitive capacities only superficially, rather than in any actual ability to anticipate possible future outcomes.

The use of technologies that delegate to machines tasks and intentions that were once the exclusive prerogative of human beings inevitably raises the question (and the attendant fear) of a progressive, generalized irresponsibility induced by technological society.

Ethical issues that cannot be ignored

The introduction of self-driving cars leaves in the background various issues that involve ethical choices, choices that are difficult to encode in automated behavior.

Faced with the choice between avoiding an accident and running over a pedestrian, what “decision” will a self-driving car make?

Who would be responsible for those automated choices?

Should responsibility for a vehicle’s automated choices be traced back to the manufacturer of the “intelligent” car, or to its actual owner, as is currently the case for ordinary road accidents (and as regulated by compulsory insurance policies)?

Without resorting to science-fiction scenarios (in the style of Isaac Asimov’s Foundation trilogy), it should be emphasized that the capability to choose does not only involve “rational” issues that can be emulated by algorithms, or problems of correct estimation and forecasting (for example, which direction of travel to take, or the correct steering angle and corresponding acceleration of the vehicle), tasks in which the computational capabilities of machines have been shown to greatly outperform those of human beings.

On the contrary, it is a question of understanding that many of the choices and decisions we are called upon to make every day (often without realizing it) inevitably involve ethical evaluations.

The fear is that, in this way, machines are delegated not only the tasks (whether “repetitive” or “predictive”) for which they were designed, but also those choices whose responsibility (like it or not) remains irremediably ours, as conscious and aware human beings.

Is autonomous driving really safer?

One of the most widespread beliefs about Artificial Intelligence holds that algorithms are less prone to errors of judgment than humans.

On this view, even activities commonly considered the prerogative of human operators should be delegated to machines, in order to reduce human error.

Among the examples usually adduced by tech pundits to support this thesis is, of course, that of self-driving cars.

Even in the face of cases such as the March 2018 accident in which an Uber (semi-)autonomous car ran over a pedestrian, leading to the victim’s death, advocates of the superiority of algorithms are nonetheless inclined to put the blame on human error rather than on the algorithms.

But is this really the case?

According to a reconstruction of the dynamics of the accident, fault would indeed be attributable to the imprudent behavior of the victim on the one hand, and to the safety driver’s delay in taking control of the car on the other.

However, this reconstruction underestimates the negative consequences of the possible presence of “bugs” in the software, as well as the fundamental methodological limits that characterize the algorithms used in autonomous driving.

Uncovering the limits of the algorithms that govern autonomous driving also reveals the limits of automated decision-making more generally, and helps prevent potential risks to physical safety.

Why autonomous driving algorithms cannot be regarded as less risky

As digital technology experts often like to repeat, we live in a data-driven age.

The increasing availability of large amounts of data (and the ability to process them) has allowed the development of services that were previously unimaginable.

Algorithms have evolved to the point of being able to exploit this growing amount of data effectively, making it possible to develop advanced services ranging from intelligent content search, to identification of interesting nearby locations via geolocation, to automatic translation of texts between languages.

Although the accuracy of the results obtained through such smart services improves day by day, they nevertheless do not always provide optimal results.

Different levels of precision have different consequences in different contexts

Think of automatic text translation services: in many cases the results are amazing, while in other cases they may be manifestly inaccurate, especially in the presence of content that is ambiguous or difficult for the machine to interpret.

In these cases, we resort to human intuition to try to correctly interpret the results offered by the machine.

The same goes for the results obtained by search engines, which often provide misleading or inappropriate answers.

It is therefore up to the user to further refine the search in order to obtain results in line with expectations, or to adequately filter the results by discarding those not of interest.

As a consequence, the degree of precision achieved by the predictions of data-driven algorithms has a different impact depending on the context.

In the case of automatic translation, even an unsatisfactory level of precision will often still be useful for the purpose at hand, such as holding a written conversation with a foreign interlocutor.

Not so when our physical safety may depend on the accuracy of the results, as in the case of self-driving cars.
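
To make the point concrete, here is a minimal sketch, with entirely made-up numbers, of why the same error rate can be tolerable in one context and unacceptable in another: what matters is not accuracy in isolation, but the cost attached to each error.

```python
# Hypothetical illustration: the same 1% error rate has very different
# consequences depending on the cost attached to each error.

def expected_cost(error_rate: float, events: int, cost_per_error: float) -> float:
    """Expected total cost of errors over a number of prediction events."""
    return error_rate * events * cost_per_error

# Machine translation: an error costs, say, a few seconds of human re-reading
# (all numbers here are invented for the sake of the comparison).
translation = expected_cost(error_rate=0.01, events=10_000, cost_per_error=1.0)

# Pedestrian detection: a missed detection can cost a life, so the (made-up)
# cost weight is set a million times higher.
detection = expected_cost(error_rate=0.01, events=10_000, cost_per_error=1_000_000.0)

print(f"Translation errors, relative cost: {translation:,.0f}")
print(f"Missed detections, relative cost: {detection:,.0f}")
```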

When algorithms give the right answer for the wrong reasons

One of the most controversial aspects of algorithms is the “unreasonable reliability” attributed to them, a reliability grounded in the undeniable predictive effectiveness associated with data.

In other words, the trust placed in the predictive capabilities of algorithms is a direct consequence of the predictive efficacy that can be extracted from the data itself.

The predictive effectiveness of data has inspired the recent “data-driven” decision-making paradigm.

However, this predictive efficacy does not always entitle us to reasonably trust the results obtained from automated “data-driven” procedures.

Predictive effectiveness is not always synonymous with reliability

To see this, it is enough to consider the results obtained with one of the most popular tools that exploit the predictive power of machine learning algorithms in combination with large amounts of data: automatic translators.

It is undeniable that these tools have improved dramatically in recent years, and that automatic translations of a text to and from different languages, obtained with artificial intelligence tools such as Google Translate, are by now “mostly” reliable.

However, it is in that “mostly” that the devil hides: precisely because they are not one hundred percent reliable, such translations are meant to be interpreted by “sentient” human beings, that is, by readers able to understand their “meaning”.

In other words, the reliability of these translations is ultimately delegated to human operators when it comes to resolving the most unusual borderline cases, those characterized by the greatest ambiguity.

In the majority of cases, we can safely rely on mechanically derived association rules between different groupings of words, inferred on the basis of their estimated probabilities.

In other words, the machine translation mechanism gives us the correct translation most of the time, but for the wrong reasons.

The automatic translator works not because it is actually able to understand the meaning of the text, but because it is possible to establish, on a statistical basis, a sufficiently reliable one-to-one relationship between the different linguistic representations of the same text.

Except, of course, in “extreme cases” such as metaphors, ambiguous or polysemic expressions, and so on, as the sketch below illustrates.
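
A minimal sketch of this purely statistical mechanism (the phrase table and its probabilities below are invented for illustration): the “translator” simply picks the candidate with the highest estimated probability, with no notion of meaning, which is why it is usually right yet fails silently on polysemic input.

```python
# Toy phrase-based translation: each source phrase maps to candidate
# translations with estimated probabilities (all entries are invented).
PHRASE_TABLE = {
    "good morning": [("buongiorno", 0.95), ("buon mattino", 0.05)],
    "bank": [("banca", 0.80), ("riva", 0.20)],  # polysemic: bank vs. riverbank
}

def translate(phrase: str) -> str:
    """Pick the most probable candidate; no understanding of meaning is involved."""
    candidates = PHRASE_TABLE.get(phrase)
    if candidates is None:
        return phrase  # unknown phrase: pass it through unchanged
    return max(candidates, key=lambda c: c[1])[0]

print(translate("good morning"))  # "buongiorno": the right answer, for the wrong reasons
print(translate("bank"))          # always "banca", even when "riva" was meant
```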

The critical point is that while, in the case of automatic translation, inaccuracies are unlikely to jeopardize the physical safety of human beings (though even this could be debated…), the same cannot be said of inaccuracies in the output of self-driving algorithms, which may prove seriously harmful.