Abstract
Ensuring the safety of autonomous vehicles (AVs) requires testing methodologies that are rigorous, scalable, and capable of addressing both routine and safety-critical situations. Traditional evaluation through mileage accumulation often fails to expose autonomous driving systems (ADS) to rare but high-impact events. Scenario-based testing has therefore emerged as a key paradigm, enabling the systematic construction of targeted and repeatable test cases. Central to this paradigm is scenario generation, which provides the essential means of populating simulations with realistic, diverse, and safety-relevant traffic situations. This survey presents a taxonomy and critical review of scenario generation methods for autonomous driving. We classify existing approaches into three paradigms—rule-based, data-driven, and learning-based—and analyze their methodologies, representative techniques, and supporting tools such as simulation platforms and scenario description languages. We also summarize evaluation metrics used to measure realism, diversity, and criticality, highlighting trade-offs among interpretability, scalability, and generalizability. Despite rapid progress, several challenges remain. Key issues include bridging the “reality gap” between virtual and real traffic, improving robustness under limited or biased datasets, and effectively modeling rare but safety-critical events. We also explore emerging research directions, including language-driven generation, hybrid frameworks that integrate symbolic rules with generative learning, and standardized open scenario repositories to enhance reproducibility and regulatory acceptance. By consolidating diverse research efforts into a unified framework, this work provides structured insights for both researchers and practitioners. It aims to support the development of scalable and certifiable testing pipelines that are essential for the safe deployment of autonomous driving technologies.
