The two recent fatal accidents involving self-driving cars from Uber and Tesla have not led to the major backlash that many people had predicted. This does not come as a surprise: the predictions ignored the long history of technical innovation, in which accidents have rarely slowed, let alone halted, the advance of a technology. Nevertheless, the two harrowing accidents have increased the concern of the public and of regulators about the safety of self-driving cars.
Therefore this is the right time for a more careful analysis of the risk profile of this technology. As we will show in the following, the specific forms of risk, accident scenarios, and risk mitigation strategies for self-driving cars differ significantly from those of other technologies developed over the last centuries. To illustrate the differences, we will examine three key aspects of the risk profile of self-driving cars and contrast them with established technologies:
1) One- or two-sided distribution of safety outcomes
Self-driving cars are an unusual product from the perspective of safety-related outcomes. Practically every product carries the risk that its use may inflict harm under some circumstances. For most products the safety-related outcomes are either harm (negative outcome) or no effect. A much smaller group of products can also lead to positive safety-related outcomes: their use increases safety. A self-driving car will prevent some accidents (positive outcome) or cause accidents (negative outcome); this two-sided distribution of safety outcomes contrasts with product categories such as microwaves, coffee machines, or electric drills, which have only one-sided safety outcomes. From one perspective, products with two-sided safety distributions are preferable to products with one-sided distributions. But they present a challenge for risk analysis and for ethical considerations, because uncertainty about the distribution of negative outcomes may need to be balanced against the near-certainty of positive outcomes. Delaying the use of self-driving cars for too long may itself cause harm (accidents that the cars would have prevented).
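The trade-off described above can be made concrete with a toy expected-value calculation. All numbers below are hypothetical illustrations, not real accident statistics:

```python
# Toy illustration of a two-sided safety distribution: the (fairly certain)
# accidents prevented per year of deployment vs. an uncertain range of
# accidents caused. All figures are invented for illustration only.

accidents_prevented_per_year = 900   # hypothetical, relatively certain benefit
accidents_caused_low = 50            # hypothetical optimistic estimate
accidents_caused_high = 400          # hypothetical pessimistic estimate

def net_benefit(prevented: int, caused: int) -> int:
    """Net accidents avoided per year of deployment."""
    return prevented - caused

best_case = net_benefit(accidents_prevented_per_year, accidents_caused_low)
worst_case = net_benefit(accidents_prevented_per_year, accidents_caused_high)

print(f"Best case:  {best_case} net accidents avoided per year")   # 850
print(f"Worst case: {worst_case} net accidents avoided per year")  # 500
```

Under these invented numbers even the pessimistic estimate yields a positive net outcome, in which case each year of delayed deployment would itself cause harm; with different numbers the conclusion could flip, which is exactly why both sides of the distribution must enter the analysis.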
In the health sector, this dilemma is a well-known problem in the approval of medical treatments, and the US Food and Drug Administration (FDA) has worked hard to balance both sides of the distribution, both by speeding up the approval process and by enabling critically ill patients to access experimental treatments in certain cases. But self-driving cars differ from medical treatments in a very positive way: whereas the expected positive effects of a treatment often do not materialize (uncertainty on the positive part of the distribution), there is much more certainty about the positive safety outcomes of self-driving cars (accident prevention), and we already have statistical data on the safety benefits of some driver assistance systems.
Thus any legislative effort to regulate the approval of self-driving cars needs to consider both sides of the distribution of safety outcomes.
2) Alignment of safety goals with development goals
For most products, safety is not an innate part or consequence of the development process. Over the last century we have learned the hard way that a large body of laws and regulations is needed (which then leads to well-thought-out internal processes) to ensure that safety is adequately addressed in all phases of the development process.
However, the situation is different for self-driving cars. For anyone developing an autonomous vehicle, the primary and overarching development goal is to operate the vehicle safely at all times. Driving as such is NOT the primary goal; it is a secondary concern, because merely navigating the car on the road and keeping control of speed and direction is only a small part of the development problem.
The car's internal state at any given moment is paramount, because the car needs to constantly monitor its environment, identify road signs and traffic lights, predict the actions of other traffic participants, and so on. Therefore the main concern of development teams is to ensure that the car has a complete and accurate internal representation (of state and probable behavior) of what is going on around it. The key metrics in the development process are not just driving errors but their much earlier causes: shortcomings in sensing, interpretation, and prediction. Thus the development of self-driving cars is a constant, intensive search for failures, potential errors, and potential flaws. As a consequence, even in the absence of any safety regulations, it would not be possible to develop a self-driving car for the market without being constantly focused on safety. Of course, this is no guarantee that no mistakes will be made, nor that the development process will yield absolutely flawless vehicles (that is not possible). But self-driving cars are one of only very few technologies where safety issues are inherently the primary focus of development.
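The idea that development metrics target the upstream causes of driving errors can be sketched as a triage over a discrepancy log. The structure below is purely hypothetical, not any real vehicle software stack:

```python
# Minimal sketch (hypothetical, for illustration): classify observed
# discrepancies by the pipeline stage that caused them -- sensing,
# interpretation, or prediction -- so that fixes target root causes
# rather than only the resulting driving errors.

from collections import Counter
from dataclasses import dataclass

@dataclass
class Discrepancy:
    stage: str      # "sensing", "interpretation", or "prediction"
    detail: str     # free-text description of the shortcoming

def triage(discrepancies: list) -> Counter:
    """Count flaws per pipeline stage."""
    return Counter(d.stage for d in discrepancies)

log = [
    Discrepancy("sensing", "pedestrian detected late at dusk"),
    Discrepancy("prediction", "cyclist trajectory mispredicted"),
    Discrepancy("sensing", "partially occluded road sign missed"),
]
counts = triage(log)
print(counts)  # sensing: 2, prediction: 1
```

The point of the sketch is only structural: the unit of measurement is a shortcoming in the internal representation, well before any driving error occurs.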
3) Efficiency of recall process for defective products
Self-driving cars are almost unique in a third dimension of risk: for most technologies it is difficult to prevent harm once a defective model has been released to the public (and this has important implications for regulation). Once an espresso machine, a drug, or another product reaches the hands of thousands or millions of users, it is very difficult to ensure that a defective product model will not repeatedly lead to harm somewhere. Recalls take time and rarely reach all owners. Again, the situation is very different for self-driving cars. They incorporate wireless communication and update mechanisms that allow the near-instant grounding of defective vehicle models. A worst-case scenario in which a flaw is discovered only after tens of thousands of vehicles have been released onto public roads is not realistic: when accidents point to the flaw, the other cars on the road can quickly be grounded, preventing further accidents. Of course, this does not mean that standards for approving self-driving cars should be lax, but rather that we should keep the likely risk scenarios in perspective when we consider regulations for self-driving cars.
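The grounding mechanism can be sketched as a revocation list that every vehicle consults before operating autonomously. This is a hypothetical illustration of the concept, not the API of any real fleet-management system:

```python
# Hedged sketch (hypothetical, not a real fleet-management API): a central
# revocation list of defective software/model versions, distributed over
# the air. A vehicle refuses to operate if its version has been grounded.

grounded_versions: set = set()  # maintained centrally, pushed to the fleet

def ground(version: str) -> None:
    """Revoke a defective version fleet-wide."""
    grounded_versions.add(version)

def may_operate(version: str) -> bool:
    """Check whether a vehicle running this version is allowed to drive."""
    return version not in grounded_versions

assert may_operate("v2.3.1")          # version in good standing
ground("v2.3.1")                      # flaw discovered after release
assert not may_operate("v2.3.1")      # every vehicle on that version stops
```

The contrast with a conventional recall is the point: instead of reaching owners one by one over months, a single revocation takes the entire defective cohort off the road.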
In summary, the risk profile of self-driving cars is quite unusual because it is positive on the following three dimensions:
— With self-driving cars, safety is the primary development objective and focus; it is an inherent part of the development process and can never be just an afterthought or constraint
— Self-driving cars have two-sided safety outcomes: besides the risk of failure, they also increase the safety of passengers. Keeping self-driving cars off the road for too long because of worries about accidents may itself be harmful
— Self-driving cars allow the instant grounding of defective models; defects cannot harm large groups of customers
In the public and regulatory discourse we need to do justice to the unique risk characteristics of self-driving cars!
P.S. For more on self-driving car safety and how (not) to determine statistically whether self-driving cars are safe, see my earlier post on Misconceptions of Self-Driving cars: Misconception 7: To convince us that they are safe, self-driving cars must drive hundreds of millions of miles