Abstract
Generative AI, retrieval-augmented architectures, and multi-source automated analytical tools are now being deployed in increasingly exacting risk-analytic environments. Yet faster processing has not yielded commensurate reductions in false alarms, missed alarms, hallucinated outputs, or failures of responsibility attribution. Against that background, this study develops a biomimetic framework that integrates collective sensing with immune-inspired verification for complex risk-information analysis. Using an openly documented two-layer data architecture that combines authentic public-source samples with rule-generated derived synthetic samples, the study links biological-to-engineering mechanism translation, multi-objective optimisation, NIST-aligned evaluation, and a governance-compatibility index within one auditable design chain. The present evidence indicates that risk level retains a stable positive association with threat scores, while fabricated relations, despite their smaller aggregate volume, are more likely to accumulate in high-risk intervals. These patterns suggest that structural perturbations distort judgment more than mere high-frequency noise does. More importantly, the study establishes the empirical and methodological conditions required for formal comparison across recognition quality, system resilience, and governance compatibility. Taken together, the paper offers a testable biomimetic mechanism model and a reproducible evaluative blueprint for auditable optimisation in complex risk-information analysis.
Supplementary weblinks
Title
Collective Vigilance Intelligence: A Research Dataset for a Bionic Collective Perception–Immune Verification Optimization Framework for Complex Risk Information Analysis
Description
This dataset was developed for research on complex risk information analysis under a Swarm Vigilance–Immune Verification framework. It integrates three interlocking layers: real public-source data, rule-generated derived synthetic data for auditable perturbation testing, and governance-audit records for traceability and accountability analysis. The public-source layer draws on GDELT 2.0, CrisisMMD, MAVEN-ERE, and MIRACL, thereby covering event logs, multimodal crisis samples, event-relation structures, and cross-lingual retrieval settings. The derived layer includes prompt injection, semantic perturbation, relation fabrication, and high-noise conflict conditions. The audit layer records source contribution, escalation pathways, verification outcomes, human corrections, final decision rationales, and governance-compatibility dimensions.
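The layer structure described above can be made concrete with a minimal sketch of one governance-audit record and the derived layer's four perturbation conditions. All field names, labels, and values below are illustrative assumptions for exposition; they are not the dataset's published schema.

```python
from dataclasses import dataclass, asdict, field
import json

# Hypothetical labels mirroring the derived layer's four rule-generated
# perturbation conditions (names are illustrative, not the dataset's own).
PERTURBATIONS = (
    "prompt_injection",
    "semantic_perturbation",
    "relation_fabrication",
    "high_noise_conflict",
)

@dataclass
class AuditRecord:
    """One audit-layer entry, paraphrasing the fields listed in the
    description: source contribution, escalation pathway, verification
    outcome, human correction, decision rationale, and
    governance-compatibility dimensions."""
    sample_id: str
    source_contribution: dict = field(default_factory=dict)  # e.g. share per public source
    escalation_pathway: list = field(default_factory=list)   # ordered review stages
    verification_outcome: str = "uncertain"                  # "confirmed" | "refuted" | "uncertain"
    human_correction: bool = False
    decision_rationale: str = ""
    governance_dimensions: dict = field(default_factory=dict)  # compatibility sub-scores

# Example record for a sample perturbed under the relation-fabrication condition.
record = AuditRecord(
    sample_id="evt-0001",
    source_contribution={"GDELT": 0.7, "MAVEN-ERE": 0.3},
    escalation_pathway=["collective_sensing", "immune_verification", "human_review"],
    verification_outcome="refuted",
    human_correction=True,
    decision_rationale="fabricated relation flagged in a high-risk interval",
    governance_dimensions={"traceability": 0.9, "accountability": 0.8},
)

# Serialising to JSON keeps the record machine-auditable end to end.
print(json.dumps(asdict(record), indent=2))
```

A flat, serialisable record like this is what makes traceability and accountability analysis possible: every escalation hop and correction is retained alongside the final rationale, so a reviewer can reconstruct why a verification outcome was reached.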