
QuantiVA: quantitative verification of autonomous driving

Published online by Cambridge University Press:  13 December 2024

A response to the following question: How to ensure safety of learning-enabled cyber-physical systems?

Renjue Li
Affiliation:
SKLCS, Institute of Software, CAS, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Tianhang Qin
Affiliation:
SKLCS, Institute of Software, CAS, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Pengfei Yang
Affiliation:
SKLCS, Institute of Software, CAS, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Cheng-Chao Huang*
Affiliation:
Nanjing Institute of Software Technology, CAS, Nanjing, China
Youcheng Sun
Affiliation:
The University of Manchester, Manchester, UK
Lijun Zhang
Affiliation:
SKLCS, Institute of Software, CAS, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Corresponding author: Cheng-Chao Huang; Email: chengchao@njis.ac.cn

Abstract

We present a practical verification method for safety analysis of autonomous driving systems (ADSs). The main idea is to build a surrogate model that quantitatively depicts the behavior of an ADS in a specified traffic scenario. Safety properties proved on the resulting surrogate model carry over to the original ADS with a probabilistic guarantee. Given the complexity of traffic scenarios in autonomous driving, our approach further partitions the parameter space of a traffic scenario into safe sub-spaces with varying levels of guarantee and unsafe sub-spaces with confirmed counter-examples. Innovatively, the partitioning is driven by a branching algorithm that features explainable-AI methods. We demonstrate the utility of the proposed approach by evaluating safety properties of the state-of-the-art ADS Interfuser on a variety of simulated traffic scenarios, and we show that our approach and existing ADS testing work complement each other. We certify five safe scenarios from the verification results and uncover three subtle behavioral discrepancies in Interfuser that can hardly be detected by safety-testing approaches.

Information

Type
Results
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial licence (https://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

Figure 1. The verification framework. We learn a surrogate model and verify the property on it. The surrogate model is iteratively refined by incremental sampling. The whole procedure recurses by dividing the configuration space.
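The learn-verify-refine-branch loop that Figure 1 describes can be sketched in a few lines. The sketch below is a hypothetical, one-dimensional stand-in: the real framework learns its surrogate from simulator rollouts and derives the error bound $\lambda^*$ with a PAC guarantee, whereas here the surrogate is a least-squares line and $\lambda^*$ is approximated by the maximum residual plus a fixed slack term.

```python
# Hypothetical sketch of the recursive loop in Figure 1, specialised to a
# one-dimensional configuration space [lo, hi]. All components (linear
# surrogate, slack-based bound) are illustrative stand-ins for the paper's.

def verify(lo, hi, fitness, threshold, n=20, depth=0, max_depth=3, slack=0.05):
    """Recursively label sub-intervals as 'safe', 'unsafe', or 'unknown';
    returns a list of ((lo, hi), verdict) pairs."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    ys = [fitness(x) for x in xs]                 # sampled fitness values

    # Fit a linear surrogate f(x) = a*x + b by least squares.
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs)
    b = my - a * mx
    preds = [a * x + b for x in xs]

    # Stand-in for lambda*: max residual plus a slack term for the PAC bound.
    lam = max(abs(y - p) for y, p in zip(ys, preds)) + slack

    if min(preds) - lam >= threshold:             # certified safe sub-space
        return [((lo, hi), 'safe')]
    if depth < max_depth:                         # branch the space and recurse
        mid = (lo + hi) / 2
        return (verify(lo, mid, fitness, threshold, n, depth + 1, max_depth, slack)
                + verify(mid, hi, fitness, threshold, n, depth + 1, max_depth, slack))
    if min(ys) < threshold:                       # confirmed counter-example
        return [((lo, hi), 'unsafe')]
    return [((lo, hi), 'unknown')]
```

For a fitness that grows linearly across the interval, the recursion splits the space into an unsafe region, a safe region, and possibly a thin unknown band around the threshold, mirroring the tree-shaped results in Figure 5.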


Figure 2. Scenario (i) Emergency Braking: the ego vehicle drives along the road while the leading NPC brakes. The configuration $\theta$ consists of several parameters such as the NPC's velocity, deceleration, and trigger distance. A function $\omega$ measures the distance between the two vehicles.


Figure 3. We show the fitness function $\rho$ and the learned surrogate model $f$ w.r.t. ${\theta _1}$ (the NPC's initial velocity), with the other parameters fixed. Here, $\rho$ is bounded by $f \pm {\lambda ^{\rm{*}}}$ with a PAC guarantee. Note that there exist velocity values that make the lower bound fall below the threshold $\tau$ (bottom right corner), which violates Equation (6), i.e., the ADS may break the collision-free property.
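The check this caption describes, assuming the property in Equation (6) requires the PAC lower bound $f(\theta) - \lambda^*$ to stay above $\tau$, amounts to a scan for parameter values where the bound dips below the threshold. The surrogate, bound, and threshold below are made-up illustrative values, not the paper's:

```python
def threshold_violations(surrogate, lam_star, tau, grid):
    """Return grid points where the PAC lower bound f(theta) - lambda*
    drops below the safety threshold tau, i.e. potential violations of
    the collision-free property."""
    return [theta for theta in grid if surrogate(theta) - lam_star < tau]

# Illustrative surrogate: predicted minimum inter-vehicle distance that
# shrinks as the NPC's initial velocity grows (made-up coefficients).
f = lambda v: 10.0 - 0.5 * v
```

Scanning `f` with `lam_star=1.0` and `tau=2.0` flags exactly the high-velocity tail of the grid, matching the bottom-right violation region sketched in the figure.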


Algorithm 1 QuantiVA


Figure 4. (ii) Follow Pedestrian: the ego car keeps a safe distance from the pedestrian in front. (iii) Cut-in with Obstacle: an NPC car in front of a van tries to cut into the lane along which the ego car is driving. (iv) Pedestrian Crossing: a pedestrian crosses the road while the ego car enters the junction. (v) Through Redlight: the ego car encounters an NPC car running the red light when crossing the junction.


Table 1. The physical values corresponding to the parameter ranges in the safety property for each scenario


Figure 5. Verification results for each scenario, arranged as a tree according to the branching paths. In each (sub)space, $\lambda$ indicates the absolute distance between the surrogate model and the fitness function, and $\# {\rm{adv}}$ the number of adversarial examples found.


Figure 6. Visualization of the SHAP values of the surrogate model learned on the whole configuration space of each scenario (i.e., corresponding to the root of each tree in the verification results).
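The branching step uses such explainable-AI attributions to decide which parameter to split on. As a self-contained stand-in for the SHAP computation (permutation importance rather than actual Shapley values), picking the most influential dimension might look like:

```python
import random

# Hypothetical stand-in for the SHAP-guided branching choice: rank input
# dimensions by permutation importance and split on the most influential one.
# The paper uses SHAP values; permutation importance is a simpler proxy.

def pick_branch_dim(surrogate, samples, seed=0):
    """Return the index of the input dimension whose shuffling perturbs
    the surrogate's output the most (samples are tuples of parameters)."""
    rng = random.Random(seed)
    base = [surrogate(s) for s in samples]
    scores = []
    for d in range(len(samples[0])):
        shuffled = [s[d] for s in samples]
        rng.shuffle(shuffled)                 # break the dimension's signal
        perturbed = [
            surrogate(s[:d] + (v,) + s[d + 1:])
            for s, v in zip(samples, shuffled)
        ]
        scores.append(sum(abs(a - b) for a, b in zip(base, perturbed)))
    return max(range(len(scores)), key=scores.__getitem__)
```

On a surrogate dominated by its first parameter, the function selects dimension 0, which is the behavior the tree-shaped partitioning in Figure 5 relies on: splits land on the parameters the model is most sensitive to.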


Figure 7. Heatmap of the parameter-space exploration results for Pedestrian Crossing #1. Brighter cells indicate parameter regions in which the ADS is more likely to violate the safety property.


Table 2. The time consumed by each phase of the verification procedure


Table 3. The testing results for scenarios (i) and (ii): the number of generations, the minimum population fitness, and the number of adversarial examples found by genetic testing


Figure 8. A comprehensive scenario with more complex traffic situations, involving various traffic participants and intermittent jitters of the ego vehicle.