As autonomous vehicle technology matures, legislators in several US states, in other countries, and at the United Nations are debating changes to the legal framework. Unfortunately, one of the core ideas of these legal efforts is untenable and has the potential to cripple the technology’s progress. We show that the idea that drivers should supervise autonomous vehicles is based on false premises and will greatly limit and delay adoption. Given the enormous loss of life in traffic (more than one million deaths per year worldwide) and the safety potential of the technology, any delay will incur large human costs.
Read the full paper (pdf).
Invalid assumptions about advanced driver assistance systems nearing full autonomy
- The average human driver is capable of supervising such systems
- Humans need to supervise such systems
- A plane’s auto pilot is a useful analogy for such systems
- Driver assistance systems will gradually evolve into fully autonomous systems
Supervising autonomous cars is neither necessary nor possible
The car industry is innovating rapidly with driver assistance systems. What began with park assist, lane-departure warnings and similar features has grown into systems that offer emergency braking and even limited autonomous driving in stop-and-go traffic or on the highway (e.g. the new Daimler S-Class).
As the systems become more capable, the number of situations in which driving decisions are clearly attributable to a car’s software rather than to the driver will grow rapidly. This raises difficult questions of responsibility and liability in the case of accidents. From a legal perspective, the easiest solution is to keep the driver in the loop by positing a relationship between driver and car in which the car executes the driver’s orders and the driver makes sure that the car only drives autonomously in situations it is capable of handling. The driver thus becomes the supervisor, responsible for the actions of the car’s software to which he delegates the task of driving.
Unfortunately this legal solution cannot accommodate advanced driver assistance systems which perform the driving task for longer periods in urban, rural and highway traffic. We will call these systems auto-drive systems to distinguish them from the current, simpler driver assistance systems, which are typically used for narrow tasks and short periods.
The legal model rests on the following two invalid assumptions:
1) An average human driver is capable of supervising an auto-drive system
Ergonomics research consistently shows that the human brain is not good at routine supervision tasks. If a car drives autonomously for many miles without incident, a normal human will no longer pay attention. Period! No legal rule can change this fact: the human brain was not built for supervision tasks. In addition, supervising a car traveling at high speed or in urban settings is very different from supervising a plane on auto-pilot (see below).
If the developers of an auto-drive system build and test their car on the assumption that a human actively monitors the car’s behavior at all times, because situations may arise that the car cannot handle alone, then accidents will happen: some drivers will not be able to react fast enough when such situations occur.
Even if a human could remain alert during the whole drive, the problem remains of how the driver can distinguish the situations a car is able to handle from those it cannot. How much knowledge will a driver need about the car’s capabilities? Once auto-drive systems evolve beyond the current, very limited highway and stop-and-go scenarios and are capable of driving in rain and urban settings, it will become very difficult for the manufacturer to enumerate and concisely describe the situations the car can or cannot handle. It will become impossible for the average driver to memorize and reliably distinguish these situations.
2) Humans need to supervise cars operating in auto-drive mode
We saw in the last section that humans cannot be relied upon to correct a car’s mistakes while driving. But humans might still be needed to ensure that the car does not attempt to drive autonomously in situations that it cannot handle well.
However, the car is equipped with a wide array of sensors and continuously assesses its environment. If its autonomous capability has limitations, it must be able to detect automatically when it reaches them. There is therefore no need to burden the driver with the task of determining whether the car is fit for the current situation.
Instead, the car needs to inform the driver when it encounters such a situation and then request that control be transferred back to the driver.
Therefore any non-trivial driver assistance system must be able to inform the driver when it enters situations it cannot handle well. There is no need to require that the casual driver be more knowledgeable about the system’s capabilities than the system itself.
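To make this division of responsibility concrete, here is a minimal sketch of such a self-assessment and handover loop. All names (`assess_environment`, `request_takeover`, and the rest of the `car` interface) are hypothetical illustrations, not any manufacturer’s API, and the threshold and grace period are assumed values:

```python
import time

# Assumed values for illustration -- not taken from any real system.
MIN_CONFIDENCE = 0.95        # below this, the car must not keep driving autonomously
TAKEOVER_GRACE_SECONDS = 10  # time the car must bridge while the driver re-engages

def auto_drive_loop(car):
    """Sketch of the control loop argued for above: the car, not the
    driver, decides whether the current situation is within its limits."""
    while car.auto_drive_engaged:
        scene = car.assess_environment()          # continuous sensor-based self-assessment
        if scene.confidence >= MIN_CONFIDENCE:
            car.execute_driving_decisions(scene)  # situation is within the system's limits
        else:
            # The car detects its own limits and requests a handover,
            # but must keep driving safely until the driver is ready.
            car.request_takeover(alert="visual+audible")
            deadline = time.monotonic() + TAKEOVER_GRACE_SECONDS
            while time.monotonic() < deadline and not car.driver_has_control():
                car.execute_driving_decisions(car.assess_environment())
            if not car.driver_has_control():
                car.perform_minimal_risk_maneuver()  # e.g. slow down and stop safely
```

The essential point is that the capability check runs inside the car’s software, not inside the driver’s head.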
Auto-pilot: the wrong analogy
The most frequently used analogy for a driver assistance system is the auto-pilot in a plane. Mentally assigning the status of a pilot to the car’s driver, who then watches over the auto-drive system, may have appeal. But it overlooks the fundamental differences between the two contexts: a car driving autonomously differs very much from a plane on auto-pilot. The nature of the tasks and the required reasoning capabilities differ considerably:
a) Physics of motion: A plane moves in 3-dimensional space through a gas. Its exact movement is hard to formalize and predict and depends on many factors that cannot be measured easily (local air currents, water droplets, ice on the wings). A trained pilot may have an intuitive understanding of the movement that is beyond the capabilities of the software. In contrast, a car moves in 2-dimensional space; its movement is well understood and easy to handle mathematically and to predict, even in difficult weather (provided speeds are adequate to the weather), as the sketch after this list illustrates.
b) Event horizon: Situations that require split-second reactions are very rare while flying; they occur frequently while driving a car. Thus the hand-off and return of control between human and machine is much more manageable in flight than in a car. There are many situations which an auto-drive system must be able to handle in full autonomy because there is no time to hand control back to the human.
c) Training: The supervision task is the primary job function of a pilot; it requires extensive, continual training and is covered by many regulations designed to ensure alertness. None of this applies, or could realistically be applied, to the average driver.
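To illustrate point a): a car’s planar motion can be captured in a few lines by the standard kinematic bicycle model. This is a textbook approximation, not the model of any particular auto-drive system; the wheelbase value and the scenario are assumptions for illustration:

```python
from math import cos, sin, tan

WHEELBASE_M = 2.7  # typical passenger-car wheelbase; assumed value

def bicycle_model_step(x, y, heading, speed, steering_angle, dt):
    """One Euler step of the kinematic bicycle model: planar (2-D)
    vehicle motion predicted from speed and steering angle alone."""
    x += speed * cos(heading) * dt
    y += speed * sin(heading) * dt
    heading += speed / WHEELBASE_M * tan(steering_angle) * dt
    return x, y, heading

# Predict the car's pose 2 s ahead at 20 m/s with a slight left steer.
state = (0.0, 0.0, 0.0)  # x [m], y [m], heading [rad]
for _ in range(100):     # 100 steps of 20 ms
    state = bicycle_model_step(*state, speed=20.0, steering_angle=0.02, dt=0.02)
print(state)
```

A plane’s motion, by contrast, depends on factors such as local air currents that cannot be measured easily, which is exactly the asymmetry described above.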
Therefore the relationship between pilot and auto-pilot cannot be used as a model for the relationship between driver and driver assistance system.
Driver assistance systems cannot gradually evolve into auto-drive systems
Much of the discussion on the progress of autonomous vehicle technology assumes that driver assistance systems will gradually evolve into auto-drive systems capable of driving on all types of roads in all kinds of driving situations. Initially, auto-drive will be available only for a few limited scenarios such as highway driving in good weather. Thereafter, more and more capable auto-drive systems will appear until the systems are good enough to drive everywhere in all situations.
Unfortunately, this evolution is not likely. Cars which drive autonomously cannot return control to the driver immediately when they encounter a difficult situation. They must be capable of handling any situation for a considerable time until the driver switches his attention to the driving task and assesses the situation. These cars cannot limit themselves to driving in good weather or light rain only – they must be able to handle sudden heavy rain for as long as the driver needs to return to the driving task, which for safety reasons must be more than just a few seconds. At realistic speeds these cars may travel a considerable distance in this time. If the car can safely bridge this delay, it can probably travel long distances in heavy rain, too.
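A back-of-the-envelope calculation makes these distances concrete; the takeover times below are illustrative assumptions, not measured values:

```python
# Distance covered while the car waits for the driver to take over.
# Takeover times are illustrative assumptions; studies report a wide range.
for speed_kmh in (50, 100, 130):
    for takeover_s in (5, 10, 20):
        distance_m = speed_kmh / 3.6 * takeover_s  # km/h -> m/s, times seconds
        print(f"{speed_kmh:3d} km/h, {takeover_s:2d} s takeover: {distance_m:5.0f} m")
```

At highway speed, even a ten-second handover means the car must master several hundred meters of sudden heavy rain entirely on its own.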
The same issue applies to traffic situations. While highways may look like an ideal, well-structured and relatively easy environment for driving, many complex situations can arise there at short notice which a car in auto-drive mode must recognize and deal with correctly. This includes many low-probability events which nevertheless arise from time to time, such as people walking or riding their bicycles on highways. Driving in urban settings is much more complex, and therefore a gradual path of auto-drive evolution is even more unlikely in such settings. Thus there may be some low-hanging fruit for the developers of auto-drive applications (limited highway driving); but almost all the rest of the fruit hangs very far up the tree! Systems that are capable of driving in urban or rural traffic cannot start with limited capabilities. From the first day, they must be able to handle a very wide variety of situations that can occur in such settings.
Regulations that harm
We have already shown that the requirement of supervised driving is neither necessary nor fulfillable for advanced driver assistance systems. But one could argue that the requirement does little harm. This is not the case. Wherever this rule is adopted, innovation will be curtailed. The safer and more convenient features of autonomous vehicles will only be available to the affluent, and it will take a long time until most of the cars on the road are equipped with such technology. This means many more lives lost in traffic accidents, much less access to individual mobility for the large groups of our population without a driver’s license (such as the elderly and the disabled), and more waste of energy, resources and space devoted to mobility.
Any country that adopts such rules will curtail innovation in car-sharing and in the new forms of urban, inter-modal and electric mobility that become possible once autonomous vehicles mature to the point where they can drive without passengers.
It is obvious today that legislation requiring drivers to supervise advanced driver assistance systems will not stand the test of time.