Reaching Safety at Scale May Involve Some “Rule Breaking”
A foundational aspect of driving, whether human-piloted or autonomous, is following the rules of the road. In ideal scenarios, vehicles obey all mandated rules without conflict. For example, the rules of the road dictate that a vehicle should always take the right of way at a green light and should avoid changing lanes across solid lane markings. In real-world driving conditions, however, conflicts frequently arise between rules, particularly in urban environments. A vehicle may be required to stop at a green light to avoid a jaywalking pedestrian, or a double-parked truck may necessitate a detour across solid lane markings. In such scenarios, "common practice" rules are followed, while traditional rules must be deprioritized to ensure safe and efficient traffic flow.
How does this affect our expectations of autonomous vehicles (AVs)? When rules conflict, and safety, common practice, cultural norms or ethical decisions take priority over standard road rules, how should autonomous vehicles behave? Contrary to what we may want to believe, namely that AVs must follow a rigid set of rules to be safe, the truth is that, just like human drivers, autonomous vehicles must break certain rules to prioritize safety.
At Aptiv, we found the solution lies in using structured artificial intelligence (AI): encode a systematic, hierarchical relationship among rules, common practices and all manner of driving preferences. In other words, we have to create a prioritization of traditional, standard rules and "common practice" rules that preserves the top priority of avoiding collisions while operating like a safe human driver.
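The idea of a rule hierarchy can be made concrete with a minimal sketch. This is not Aptiv's actual implementation; the rule names, candidate maneuvers and data structures below are hypothetical. It assumes rules are ranked by priority and candidate plans are compared lexicographically by the rules they violate, so a plan that breaks only a low-priority rule (stay in the marked lane) beats one that breaks a high-priority rule (avoid collision):

```python
# Hypothetical sketch of a priority-ordered rule hierarchy.
# Rules are listed from highest to lowest priority; a candidate plan's
# violations are mapped to a tuple in that order, and Python's
# lexicographic tuple comparison picks the least-bad plan.

RULES = [
    "avoid_collision",       # highest priority
    "obey_traffic_signals",
    "stay_in_marked_lane",   # lowest priority
]

def violation_vector(violations):
    """Map a set of violated rule names to a priority-ordered tuple."""
    return tuple(1 if rule in violations else 0 for rule in RULES)

def best_plan(candidates):
    """Pick the candidate whose violations are lexicographically smallest."""
    return min(candidates, key=lambda c: violation_vector(c["violations"]))

# Scenario from the text: a double-parked truck blocks the lane.
# Squeezing past risks a collision; detouring crosses a solid line.
candidates = [
    {"name": "squeeze_past",     "violations": {"avoid_collision"}},
    {"name": "cross_solid_line", "violations": {"stay_in_marked_lane"}},
]

print(best_plan(candidates)["name"])  # cross_solid_line
```

Because comparison is lexicographic, no number of low-priority violations can ever outweigh a single higher-priority one, which is exactly the property a safety-first hierarchy needs.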
The big picture: More than 90 percent of the almost 1.3 million annual road fatalities worldwide involve human error. Deploying autonomous vehicles that rigorously obey traffic rules could therefore substantially improve road safety. However, merely adhering to traffic rules is insufficient in the face of the enormous complexity of real-world urban driving.
To achieve a breakthrough in safety, we must create AVs that can interpret rules and preferences in a sophisticated, human-like fashion. The primary responsibility of both human-piloted and autonomous vehicles should be the same: get to the destination safely while avoiding collisions.
What can be done? Structured AI is a rigorous system for mathematically encoding logical descriptions of driving rules and preferences, and the relationships among them. Such a system allows an AV to determine the safest possible action, even in scenarios where rules of the road must be violated to ensure safety. Structured AI can also adapt from country to country, enabling developers to scale AVs globally without rewriting (or retraining) decision-making systems.
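One way to picture that country-to-country adaptability, again as a hypothetical sketch rather than Aptiv's actual system, is to make the rule ordering a parameter of the planner: adapting to a new region then means supplying a different priority list, while the decision logic itself is untouched. The locale names and rules below are invented for illustration:

```python
# Hypothetical sketch: the same lexicographic planner parameterized by a
# locale-specific rule ordering, so scaling to a new country swaps a
# priority list instead of rewriting or retraining the planner.

def violation_vector(violations, rules):
    """Map violated rule names to a tuple ordered by the given priority list."""
    return tuple(1 if rule in violations else 0 for rule in rules)

def best_plan(candidates, rules):
    """Pick the candidate with the lexicographically smallest violations."""
    return min(candidates, key=lambda c: violation_vector(c["violations"], rules))

# Two invented locales that rank the same two rules differently.
LOCALE_A = ["no_passing_on_right", "minimize_lane_changes"]
LOCALE_B = ["minimize_lane_changes", "no_passing_on_right"]

candidates = [
    {"name": "pass_on_right",      "violations": {"no_passing_on_right"}},
    {"name": "change_lanes_twice", "violations": {"minimize_lane_changes"}},
]

print(best_plan(candidates, LOCALE_A)["name"])  # change_lanes_twice
print(best_plan(candidates, LOCALE_B)["name"])  # pass_on_right
```

Identical candidates, identical planner code, different decisions: the behavioral difference between regions lives entirely in the declared priority ordering.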
The bottom line: Safe human drivers synthesize their knowledge of traffic rules, safe driving practice, and common sense to make good driving decisions in a wide range of scenarios. To achieve safety at scale, autonomous vehicles must do the same.