This paper adopts a sociosemiotic perspective to examine how normative consensus and legitimacy are constructed in global artificial intelligence (AI) governance discourse. Drawing on a corpus of forty-seven international normative documents, the study identifies an emerging cross-textual consensus around three core principles – Safety, Human-centricity and Fairness – and analyses how each is semiotically encoded. The findings reveal tensions between state and non-state actors, and between semiotic agreement and practical implementation: ‘Safety’, for instance, is often framed through securitisation discourse, while ‘Human-centricity’ is increasingly grounded in international human rights frameworks. The study further shows that discursive strategies such as nominalisation help establish surface-level consensus but introduce ambiguity that undermines enforceability. By conceptualising governance texts as dynamic semiotic systems, the research moves beyond the hard law–soft law dichotomy and reveals global AI regulation as a contested arena of meaning-making, offering a theoretical basis for more inclusive and operational governance models.