I. Introduction
Risk regulation has been expanding in European digital policy.Footnote 1 From the regulatory framework of personal data to content and artificial intelligence (AI), the European approach has primarily relied on risk assessment obligations and procedures focused on mitigating risks to fundamental rights. In particular, the frameworks of data protection, platform regulation, and artificial intelligence underline this focus on constitutional rights, as seen in the Recitals of the General Data Protection Regulation (GDPR),Footnote 2 the Digital Services Act (DSA),Footnote 3 and the Artificial Intelligence Act (AI Act).Footnote 4 The main characteristic bringing these regulations together is their focus on European constitutional values anchored in democracy and the rule of law, as in Article 2 TEU, which serve as the core parameter for assessing and mitigating risks.
However, the approach oriented towards constitutional values has not always characterised European risk regulation. Instead, risk regulation in Europe has been more intricately connected to science and to precaution as a principle.Footnote 5 Since the last century, the rationalisation of risk has been the answer to the unknown landscape that transnational and technological threats pose to social goods. This process was grounded in quantitative cost-benefit analysis and reinforced by the precautionary principle.Footnote 6 This principle has indeed played a foundational role in modulating European policies across many different fields, from environmental law to food safety.Footnote 7 Rooted in risk assessments of potential harm based on a mix of scientific and statistical data, the principle has served as a guiding framework for policy-makers, limiting uncertainty by trying to rationalise risk.
European digital policy instead seems to follow another path. Rather than being directly connected to ex-ante parameters or scientific risk assessment and knowledge, the European approach to risk regulation in digital policy considers risk not in terms of scientific uncertainty, but in terms of the impact on constitutional values, particularly fundamental rights, stemming from digital technologies and the imbalance of power between public and private actors. The lack of a clear precautionary framework in digital regulation is particularly evident when examining landmark legislative instruments in different areas. While elements of the precautionary principle can be identified in the AI Act, as in the case of the prohibited practices,Footnote 8 and the technical threshold for generative AI models with systemic risk,Footnote 9 the overall approach to risk regulation in European digital policy remains distinct from other fields, particularly those where the precautionary principle has served as a guide.
This shift of perspective highlights a fundamental divergence in how risk is conceptualised and regulated across different domains. Digital policy operates in an environment of uncertainty, where risks are often speculative and difficult to quantify,Footnote 10 and are frequently left to the assessment of the regulatees themselves, namely private actors. The difference can indeed be attributed to the inherent challenges in quantifying and measuring digital risks, particularly in areas such as personal data processing and the dissemination of illegal content. Issues such as algorithmic bias, misinformation, privacy violations and cybersecurity threats are difficult to measure using traditional scientific methods.Footnote 11 Unlike environmental hazards or food safety risks, which can be assessed through empirical testing and scientific methodologies, the impact of digital risks is often diffuse, context-dependent and difficult to predict.Footnote 12 Digital policy thus addresses risks that are often not yet fully understood, leading to regulatory frameworks that prioritise governance mechanisms over science. Likewise, the concept of risk in digital policy is often linked to broader societal and ethical considerations rather than to direct scientific evaluation. In the case of AI regulation, for example, concerns about bias, discrimination and autonomy are central to regulatory discussions,Footnote 13 but they do not lend themselves to the same types of analysis used in science-based knowledge. Instead, they require normative judgements about fairness, accountability and fundamental rights. This has led to the development of regulatory mechanisms, including impact assessments and ethical guidelines,Footnote 14 which aim to manage risks through governance rather than through the scientific identification and mitigation of hazards.
This mechanism, which has acquired central relevance in European digital policy, reflects a broader shift in risk regulation driven by the growing imbalance of power between public and private actors, as shaped by the consolidation of information capitalism.Footnote 15 The transfer of regulatory responsibility directly to private actors expands the areas of private governance over the effective protection of fundamental rights and democratic values, as underlined by the GDPR, where the scope of obligations is primarily shaped by data controllers on the basis of the principle of accountability. As public authorities increasingly depend on market actors to implement and enforce regulatory goals, a significant share of regulatory responsibility is transferred to these private actors, who are thereby tasked with ensuring the effective protection of fundamental rights and democratic values through practices of risk assessment and mitigation.
Within this framework, this paper examines how risk regulation in European digital policy follows a different logic compared to the approaches characterising the precautionary principle. What has changed is not the reliance on risk regulation, but the approach to risk, whose focus is on constitutional values such as the protection of fundamental rights and democratic values. Unlike the precautionary principle, which tends to centre on empirical risk assessment and scientific evidence to rationalise imminent or possible material hazards and ultimately drive regulation, here the parameter of risk assessment shifts from hard science to social science, and particularly to constitutionalism. The risk-based approach implemented in European digital policy treats constitutionalism as its science, thus making principles such as proportionality and accountability the core of the system and replacing empirical data as the basis for risk assessment. This process has resulted in a regulatory framework where the assessment and mitigation of risks are mediated by legal and institutional considerations rather than scientific principles.
Furthermore, the paper argues that this transformation is part of the European constitutional reaction to the challenges posed by digital technologies to fundamental rights and democratic values, and a way to address the imbalance of power between public and private actors in the digital age. This shift has been deeply influenced by the predominance of private actors, particularly the large technology firms that dominate the digital ecosystem. Their significant control over data, algorithms and digital infrastructures makes them both sources of potential risk and key collaborators in risk mitigation. Rather than adopting a purely science-driven approach, the European approach to risk regulation in digital policy takes constitutional values as its parameter, thus leading risk regulation to become a way for public actors to limit the power imbalance between public and private actors while balancing fundamental rights, democracy and the market dynamics shaping digital ecosystems.
This paper first explores the evolution of European risk regulation, looking particularly at the precautionary principle. Second, it analyses the characteristics of risk regulation in European digital policy, examining legislative frameworks such as the GDPR, the DSA and the AI Act. Third, the paper examines how the evolving landscape of risk regulation oriented towards constitutional values reflects the broader imbalance between public and private actors in the digital age.
II. The evolution of European risk regulation and precaution
The rising threats of modern societies brought a new focus on understanding and rationalising risks. Risk rationalisation rests on the premise that probability and, by extension, quantification make hazards amenable to comparable calculation and thereby to “objectively” defensible decision-making.Footnote 16 On this foundation, quantitative risk assessment and cost–benefit analysis have been constructed as tools that transpose the multiplicity of technological impacts into comparable metrics, allegedly providing technical support for regulatory choices.Footnote 17
Yet, by homogenising environmental, health, economic and ethical dimensions, quantitative risk assessment excludes salient qualitative effects and proves inadequate under conditions of uncertainty, where outcomes are known but their probabilities are not, or ignorance, where even the universe of outcomes cannot be delimited.Footnote 18 In such contexts, regulatory science forfeits its supposed neutrality. Its judgments become entwined with political considerations and social values, blurring the traditional divide between risk assessment (facts) and risk management (values).Footnote 19
Precisely because a stable epistemic foundation is unattainable in these circumstances, the precautionary principle emerges as a normative counterweight. Its logic is not to quantify the unquantifiable, but to acknowledge that, where causal relations are unknown or incalculable and the scope for surprise is wide, public decisions cannot rely solely on quantitative risk assessment techniques. Far from repudiating science, precaution demands a more pluralistic and candid approach to its use: still grounded in empirical inquiry into the relevant observable reality, yet conscious of its limits and open to extra-scientific forms of knowledge.Footnote 20 Thus, whereas risk rationalisation seeks legitimacy through probabilistic calculation and the promise of objectivity, the precautionary approach shifts the centre of gravity to the governance of uncertainty, privileging prudence, the allocation of the burden of proof, and open political deliberation as means to safeguard the public interest when the “facts” are unstable or incomplete.
Historically, the precautionary principle has served as a tool for policymakers to manage risks predominantly in health, environmental and safety-related domains.Footnote 21 Its implementation has typically involved public policies aimed at limiting economic activities, such as restricting or banning products from circulation.Footnote 22 Risk takes shape through the interplay of diverse realities in a context of uncertainty.Footnote 23 Any account of risk is inseparable from how one conceives the state–society nexus, particularly as that nexus adapts within technology-driven knowledge societies.Footnote 24 At the same time, risk plays several distinct roles in the regulatory sphere: it can be the very focus of regulation, supply the rationale for introducing rules, shape the architecture and processes of regulatory bodies, and underpin the mechanisms through which regulators are held accountable.Footnote 25
An example of this is financial market regulation, where risk is understood as the probability or possibility of events negatively affecting the strategic or economic objectives of financial institutions, particularly through losses, the deterioration of solvency, or reputational damage. As financial markets grew more complex and globally interconnected during the second half of the twentieth century, risk management became increasingly sophisticated, leading to a segmentation of risks into distinct categories such as credit risk, market risk, operational risk and liquidity risk,Footnote 26 as underlined by recent examples of the purported adoption of the precautionary principle in financial spheres as well as in the energy transition.Footnote 27
The conceptualisation of the precautionary principle has since been extended to other spheres of regulation and policy-making, with the aim of coping with an evolving reality or at least providing tools to decipher it. The principle thus evolved into an intellectual and practical guide for responding to natural disasters and imminent environmental hazards in the twentieth century.Footnote 28 Prudence thereby became a principle guiding decision-making when the contingent circumstances surrounding a situation are uncertain, involving the evaluation of available knowledge to avoid potential harm to the environment, personal integrity and other legally protected interests, even in the absence of complete information about the risks involved.
The first legal expressions of the precautionary principle appeared in national legal systems,Footnote 29 such as those of Germany and Sweden.Footnote 30 German environmental policy, known as Vorsorge (prudence and foresight), and Swedish antipollution standards, which as early as 1981 established the legitimacy of acting before full certainty about a concrete situation had been reached,Footnote 31 exemplify this evolution. As Mary Stevens explains, this approach gained prominence in the international arena, spreading to environmental law, particularly through the North Sea protection conferences. Germany introduced its already settled domestic approach to address the disagreement over whether to wait for proof of harmful effects before acting. By the second conference, the term “precautionary approach” had been adopted.Footnote 32
By 1990, the principle was explicitly recognised in supranational fora such as the Bergen Declaration,Footnote 33 and the OSPAR Convention,Footnote 34 where its role in addressing scientific uncertainty was emphasised, allowing preventive actions even in the absence of conclusive evidence. In a broader international context, the principle was formally enshrined at the international level in the 1992 Rio Declaration, adopted at the United Nations Conference on Environment and Development. Its goal was to ensure that scientific uncertainty would no longer be an excuse for inaction when serious or irreversible threats of harm are at stake.Footnote 35 Therefore, this principle, which emerged from international environmental protection legislation,Footnote 36 has since extended to various areas related to safety, such as human, animal and plant health.Footnote 37
Likewise, the precautionary principle entered the EU legal order around that time. Moreover, every step in the evolution of EU regulation was meant to be a step forward for environmental policy, making sustainable development a core standard for the conclusion of supranational and national agreements.Footnote 38 The precautionary principle has increasingly become a general principle of law,Footnote 39 and a guiding framework for situations where political action was not based on fully known or “knowable” information,Footnote 40 particularly when decision-making involved scientific uncertainty about the potential risks of harmful events. To unify the criteria for its application, the European Commission approved the Communication on the Precautionary Principle in 2000.Footnote 41 This document outlined the conditions under which the principle could be invoked:
In those specific circumstances where scientific evidence is insufficient, inconclusive or uncertain and there are indications through preliminary objective scientific evaluation that there are reasonable grounds for concern that the potentially dangerous effects on the environment, human, animal or plant health may be inconsistent with the high level of protection chosen for the Community.Footnote 42
Moreover, the Communication draws a distinction between the enabling situation (scientific uncertainty about a potential harm), the studies corroborating the intuition of potential harm, and the characteristics of the precautionary measures. With regard to the enabling situation, the Commission understands that policy decisions to act or not to act will be made on the basis of the triggers for recourse to the precautionary principle,Footnote 43 particularly the identification of potentially negative effects, the performance of a scientific assessment and, finally, the existence of scientific uncertainty.Footnote 44 The principle cannot be invoked on intuition alone; it requires a comprehensive analysis of the specific risk to determine scientific uncertainty.Footnote 45
Furthermore, the Commission identified four components of risk evaluation: hazard identification, hazard characterisation, exposure appraisal and risk characterisation. Each of these steps, when incomplete, contributes to the overall level of uncertainty and influences the foundation for protective action. Once the scientific evaluation is as complete as possible,Footnote 46 and the decision is made by assessing the consequences of both action and inaction, the precautionary measure must meet the following requirements.Footnote 47 First, the measures taken by decision-makers should be proportionate to the chosen level of protection. Second, the application of the principle cannot be discriminatory. Third, despite the lack of information, measures should be consistent with others already adopted in similar circumstances or using similar approaches. Fourth, the benefits and costs of action and inaction must be examined, acknowledging economic evaluation while also taking into account other methods, such as those concerning efficacy and socio-economic impact. Fifth, measures are provisional in nature and should be subject to review in the light of new scientific data in order to maintain or adjust previous actions. Finally, another characteristic of the precautionary principle as applied to real policies is the shift in the burden of proof.
All of the above depicts the Commission’s effort to systematise the principle and to require exhaustiveness in risk assessment, to the extent possible, so as to avoid justifying arbitrary decisions under the pretext of the precautionary principle. The principle has been a key component of European legal and regulatory frameworks, with courts and policymakers responsible for defining its scope and application for almost half a century. Recent examples of its application include the 2024 regulation on the environmental performance of flights,Footnote 48 and the 2025–2030 Sustainable Fisheries Partnership Agreement,Footnote 49 illustrating the principle’s continued relevance in European policy. However, this approach has raised questions about its impact on innovation and legal certainty,Footnote 50 also leading to inconsistencies in the EU’s judicial application of the principle,Footnote 51 particularly regarding the degree of uncertainty required, the justification for its invocation, and the measures taken. At the same time, one cannot forget that the precautionary principle serves as a performative tool, particularly in science-based situations that rely on context-specific actions by policymakers.Footnote 52
Regulation based on the precautionary principle often exhibits a hybrid nature, combining command-and-control mechanisms with more flexible instruments. On the one hand, some environmental regulations impose clear obligations of result that reflect a strict regulatory approach. For example, Directive 2008/50/EC on ambient air quality and cleaner air for Europe sets binding limit values for pollutants such as nitrogen dioxide (NO2) and particulate matter,Footnote 53 leaving no room for discretion to Member States in certain cases.Footnote 54 This represents a paradigmatic case of a mandatory outcome obligation. On the other hand, the precautionary principle is also operationalised through procedural tools such as Environmental Impact Assessments (EIAs), which embody a more open-ended, risk-assessment logic. Directive 2011/92/EU (as amended by Directive 2014/52/EU) requires the identification, description and evaluation of potential environmental effects of certain public and private projects prior to approval.Footnote 55 However, it only refers to lists of broad environmental factors to be considered, such as biodiversity, soil, water and human health, but does not prescribe specific outcomes.Footnote 56 Instead, it facilitates informed decision-making under uncertainty, aligning with the core logic of precaution.
This orientation to precaution has not found space in digital policy. Although the risks associated with digital technologies are acknowledged and have underpinned the adoption of new legal instruments, the precautionary principle is not formally invoked in the digital policies implemented by the European Union over the last decade. The GDPR and the DSA refer to risk regulation and assessment, but they do not follow the approach that characterises policies and legislation based on the precautionary principle. While the precautionary principle offered answers for a long list of subjects, such as health, food safety, industrial management, chemicals, pharmaceuticals, nuclear power plants and medical devices,Footnote 57 it does not offer the same guidance for countering the risks brought about by the digital age, even though these raise concerns about significant damage.
As underlined in the next section, risk regulation in European digital policy does not follow the same logic as the precautionary principle. The approach oriented to fundamental rights and democratic values in digital policy primarily leads to looking at risk assessment with a different focus, moving from scientific evidence to a constitutional-based approach.
III. The emancipation of European digital regulation
Risk regulation has been an instrument for addressing the challenges of a risk society.Footnote 58 However, the approach to risk has undergone an evolution within the framework of European digital policy. The regulatory approach analysed above diverges substantially from the approach applied to digital policies and regulation, which not only considers risks but is also “risk-based,” thus reflecting the transformation of normative paradigms in response to the challenges of the digital age.
Risk regulation and a risk-based approach to regulation do not necessarily overlap as concepts. The former looks at risk as the direct object of regulation, with acceptability thresholds and control measures defined in the instrument itself, while the latter considers risk as an assessment and prioritisation criterion that guides compliance and enforcement within an existing framework. In Black’sFootnote 59 account, risk-based approaches structure decision processes (e.g., tailoring obligations to operator risk profiles, managing agency and compliance risks), but they do not, by themselves, settle what counts as an acceptable risk. Put differently, risk regulation is substantive, determining the substantive content of what is to be controlled, while risk-based regulation is methodological and organisational, shaping not the content but the way to address risks.
In the context of digital regulation, the European approach combines an attempt to regulate risk with a risk-based approach, which acquires not only a political but also an eminently constitutional dimension, since the measures adopted aim to prevent negative effects on fundamental rights and democracy. As Grozdanovski points out with reference to the AI Act,Footnote 60 but equally applicable to the GDPR and the DSA, the definition of risk in this sphere is oriented not only towards physical integrity but, above all, towards safeguarding essential liberties. A new model therefore emerges, characterised by three intertwined shifts: who decides, how risk is calculated, and what regulatory outcomes follow.
1. The regulators: from public regulators to private regulatees
In the traditional approach to risk regulation, public authorities, assisted by scientific experts, were responsible for both the evaluation and the management of risk. Two important milestones were risk evaluation (facts) and risk management (values).Footnote 61 Scientists would be responsible for evaluating the piece of reality they had to decipher and would ideally submit their neutral findings to decision-makers, who would cautiously manage that information and produce an outcome or policy to the benefit of society.Footnote 62 Hesselink affirmed that this model resulted in two agents: the epistemic authority and the governance authority. While there is controversy over whether they are perfectly separated and whether the output of the first is delivered in a fully neutral form,Footnote 63 Hesselink explains that they obtain legitimation from different sources. Scientific findings earn credibility through rigorous methods and peer review, whereas political authority relies on democratic oversight and legal accountability for its legitimacy.Footnote 64 Therefore, the genuine answer to what makes digital risk regulation so different lies not in the steps used to process risk but in the agents that have the power to do so.
Yet, in the digital sphere, the locus of decision-making migrates decisively toward private actors. European digital regulations contain subjective obligations of protection that depend on the risk posed to fundamental rights. For example, the GDPR introduces the general principle of accountability, de facto delegating to data controllers the responsibility to conduct a risk assessment.Footnote 65 One example is the obligation to conduct a data protection impact assessment to measure risks in relation to the processing of personal data. Indeed, when the processing is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data.Footnote 66
Meanwhile, the DSA requires providers of intermediary services to act in a diligent, objective and proportionate manner, taking into account the fundamental rights of the recipients of the service, as enshrined in the Charter.Footnote 67 Likewise, VLOPs should diligently identify, analyse and assess systemic risks arising from the design, functioning, or use of their services. These assessments must be carried out at least once a year or prior to the deployment of functionalities likely to have a critical impact.Footnote 68 The DSA mandates that such assessments focus on risks such as the dissemination of illegal content, negative effects on fundamental rights, including privacy, data protection, freedom of expression, non-discrimination and the rights of the child, as well as on democratic processes, public security, public health and individuals’ physical and mental well-being.
With regard to the AI Act, it establishes different risk assessments applying to providers and deployers of high-risk AI systems. One example is the obligation to conduct a Fundamental Rights Impact Assessment prior to deploying high-risk AI systems,Footnote 69 particularly when they are used in sensitive areas such as justice, migration or democratic processes, or in decisions that significantly affect individuals. This assessment aims to identify the specific risks that the use of the system may pose to individual rights and freedoms, and the measures envisaged to prevent or mitigate those risks.
The reallocation of responsibility is not merely administrative: it entails a reconfiguration of the relationship between public and private actors in the governance and control of technology. While the delegation of regulatory functions to private actors is not unique to the digital domain, the structure of delegation and oversight in EU digital regulation presents distinctive features. In other areas of risk regulation, such as food safety and pharmaceuticals, private actors generate essential data and conduct trials, yet centralised public agencies such as the European Food Safety Authority or the European Medicines Agency retain decisive authority through ex ante approval processes, mandatory data disclosure, and public monitoring. By contrast, digital regulation relies predominantly on accountability and compliance obligations, where risk assessments and transparency reports are largely defined by the regulated entities themselves.
This system represents a different model of risk regulation compared to other areas where regulators have maintained more control. For instance, to ensure safety in the pharmaceutical sector, public actors have adopted a paternalistic posture, presuming serious health risks and therefore concentrating power over the market. Meanwhile, in the digital realm, public actors have relied on private actors’ capacity for self-regulation, with public intervention typically occurring only when harm or non-compliance arises. The lack of a central authority, combined with the opacity of algorithms, trade secrecy protections and the unprecedented scale of concentrated private power, generates a much sharper informational asymmetry between regulators and firms. Thus, unlike the pharmaceutical or food sectors, where public oversight can independently validate private inputs, digital regulation places regulators in a structurally reactive position, dependent on the cooperation and disclosures of dominant platforms.
Unlike the classical decision-making model, where regulators occupied centre stage, exercising authority buttressed by expert scientific assessments, in the digital-risk arena, technical authority now resides with private entities, as they alone command the underlying datasets and possess privileged knowledge of their systems’ internal operation.
2. The risk calculation: from quantifiable hazards to intangible harms
The European approach to risk regulation in digital policy is also based on a different risk calculation paradigm. While risk has mostly been treated as an empirical and quantifiable prospect of tangible harm, anchored to legal thresholds (pollutants measured in micrograms per cubic metre, radiation exposure in millisieverts),Footnote 70 digital risk, by contrast, is intangible and weakly measurable. Data processing, algorithmic bias and information disorders lack stable metrics comparable to parts-per-million or exposure doses. Measurability becomes the decisive fracture line between, for instance, environmental and digital risk regimes,Footnote 71 where assessments rely less on science and more on constitutional reasoning about rights and democracy. This epistemic indeterminacy legitimises a plural and qualitative appraisal in which legal, ethical and socio-technical perspectives weigh as heavily as statistical models.
Digital risks in European digital regulation are indeed primarily connected to interference with fundamental rights and democracy. While epistemic uncertainty is present across domains, the European approach is marked by high ambiguity, such as reasonable disagreement about values and metrics (e.g., what counts as “harm” to fundamental rights), and by the indeterminacy of outcomes emerging from complex socio-technical interactions. More specifically, regulations based on the precautionary principle have historically centred uncertainty on hazard identification and exposure–dose–response relationships, which become comparatively more tractable once measurement regimes stabilise. By contrast, digital risk assessment faces uncertainty that stems not only from technological opacity but also from assessments based on concepts drawn from the social sciences and, particularly, constitutionalism.
Risk regulation, as inspired by the precautionary principle, has addressed risks that, although uncertain, are connected to a physical and quantifiable reality. In contrast, in European digital regulation, the system of governance focuses on the integration of public and private actors around a technology whose full beneficial or threatening potential is not only unknown but is also appraised against constitutional parameters distinct from scientific concepts. In the former, scientific uncertainty is first subjected to processes of evaluation and quantification through empirical data, that is, an assessment conducted by scientists. In the latter, the risks derived from digital technologies, whether from data processing, online services, or AI, are speculative, often result from social dynamics, are difficult to measure, are rarely subjected to thorough scientific assessment, and are frequently left to the appraisal of private actors. The sustained absence of a systematic method to quantitatively delimit digital risks contrasts with the methodology of traditional scientific evaluation,Footnote 72 forcing regulators to confront epistemological uncertainty in digital policies rather than basing regulation on a neutral scientific assessment of uncertainty.
The evolution of risk regulation in digital policies reflects a paradigmatic transformation: it departs from the scientific and measurable approach of classic European risk regulation and embraces risk management based on constitutional values, reflecting both the handling of uncertainty and the adaptation to new power dynamics in digital society. In policies inspired by the precautionary principle, risk has been primarily conceived in terms of an empirical and quantifiable assessment of potential harms and resulting damages.Footnote 73 In digital policy, instead, the regulatory approach is grounded not in scientific assessment but in constitutional values, emerging as a response not only to the need to manage the inherent uncertainties of digital technologies but also to the power imbalance and information asymmetry that characterise contemporary technological environments, particularly between public and private actors. This model does not rely on science and expert knowledge, but rather on the recognition by public actors that the rapid evolution of digital technologies demands mechanisms able to accommodate the unpredictability and dispersion of risks, and the predominance of market actors, while protecting fundamental rights and democratic values.
3. The regulatory outcome: from precautionary measures to accountability and mitigation
A subtle yet profound difference lies in the very nature of the risk being evaluated in digital policies. Although digital policies apply evaluation tools similar to those of the precautionary sphere, such as impact assessments or simulations, the agents and the epistemology of risk have changed, and so have the regulatory outputs. Risk regulation influenced by the precautionary principle typically couples command-and-control obligations (e.g., binding emission ceilings) with procedural duties such as environmental impact assessments. The operative logic is preventive: in cases of scientific uncertainty, regulators err on the side of caution and adopt more stringent preventive measures.
Digital regulations, conversely, privilege accountability and ongoing risk management. In this regard, the GDPR set the first paradigmatic example of this transition. Unlike the Data Protection Directive,Footnote 74 the GDPR has approached risk regulation by articulating the notion of accountability as connected to the protection of personal data as a fundamental right.Footnote 75 The GDPR can indeed be considered the first example of how fundamental rights have become an essential normative basis in European digital policy, opening the door to the spread of a risk-based approach in the DSA and the AI Act. Data protection impact assessments (DPIAs) under the GDPR, systemic-risk audits under the DSA, and fundamental rights impact assessments (FRIAs) under the AI Act are conducted during development and often shortly before (or even during) deployment. Their aim is dynamic mitigation rather than prior prohibition. Even where obligations are stringent, e.g., annual reassessments, the remedies tend to be adaptive corrections (content moderation tweaks, bias monitoring, transparency reports) rather than outright bans.
This evolution underscores a constitutional turn. Regulation is calibrated to protect fundamental rights amid profound uncertainty and private dominance, rather than enforcing a priori technical ceilings. The result is a governance architecture in which precaution remains one option among many, overshadowed by an iterative, rights-oriented risk management that continuously negotiates the fluid power relations of the digital ecosystem. This shift in risk regulation is founded on the imbalanced relationship between public and private actors in the digital age.Footnote 76 While risk regulation founded on the precautionary principle has traditionally aimed at managing risks in areas where public actors could rely on established expertise, digital regulation faces the challenge of regulating areas governed by private actors in the algorithmic society.Footnote 77
The imbalance between public and private actors, stemming from a structural transformation in how knowledge and control are distributed, has shifted the locus of power to private actors. These actors, often large technology companies, hold exclusive access to the critical datasets, algorithmic systems, and technical know-how that underpin digital infrastructures.Footnote 78 As a result, public regulators frequently find themselves at a disadvantage, lacking the informational and technical capacity to assess or respond to emerging digital risks independently.
This divergence not only evidences the limited capacity of the traditional precautionary model to respond to the challenges of digital governance, but also implies a reconfiguration of the exercise of power, fundamental rights, and public and economic interests.
IV. Risk regulation and European constitutionalism in the digital age
The transition from a scientific rationality toward a constitutional approach in risk regulation represents, ultimately, a recognition and focus on the political, and not merely technical, nature of the challenges posed by digital technologies. The protection of fundamental rights and the safeguarding of democratic values form the core of risk regulation in the digital age, employing constitutional principles such as proportionality and accountability as normative guides to address the uncertainty of the digital environment.
This transformation can be considered a constitutional reaction. In response to risks to fundamental rights and power asymmetry, the EU has increasingly turned to a risk-based approach grounded in constitutional values as a means of asserting regulatory authority. Rather than relying solely on technical expertise or risk models, European institutions invoke fundamental rights, such as privacy, equality, and freedom of expression, as normative anchors for intervention. This shift reflects an attempt to re-establish public oversight and legitimacy in domains dominated by private governance. However, because private actors continue to control the architecture and logic of digital services and AI systems, this approach remains constrained, often reactive, and subject to the interpretive boundaries set by the very actors it seeks to regulate.Footnote 79
This situation has made private actors, such as online platforms, an essential tile in the mosaic of enforcing public policy objectives. A paradigmatic example is the EU policy against disinformation, which is profoundly based on a process of collaboration and trust between the European Commission, platforms, and other stakeholders, as manifested by the adoption of the Strengthened Code of Practice on Disinformation.Footnote 80 The underlying logic of this new regulatory paradigm lies in the functional duality that characterises private actors in the digital ecosystem: they are simultaneously generators of digital risk and its main mitigators. This paradox constitutes one of the defining features of contemporary digital regulation. Private entities, by concentrating critical information and controlling technological infrastructure, generate a power asymmetry with respect to the public institutions responsible for safeguarding fundamental rights. Simultaneously, however, these same private actors constitute the most effective resource, and often the only one available, to assess and mitigate such risks, given their privileged technical knowledge and operational capacity.
This imbalance has led to the development of a different regulatory approach that aims to manage the interaction and cooperation between public and private actors, and the expansion of risk regulation in digital policy is an example. By adopting a more performance-based approach,Footnote 81 in which responsibility rests with the regulated party, the risk-based approach focuses on the fulfilment of regulatory objectives through discretion and adaptability in their implementation. Digital regulation thus transforms into a fundamentally political process, characterised by constant negotiation between private and public interests. Even the AI Act, which introduces prohibited practices following a top-down approach, combines performance elements with technical standards for high-risk systems,Footnote 82 reflecting the hybrid proposal suggested by Coglianese for contexts in which pure measurement proves complex.Footnote 83
Consequently, instead of demanding concrete results or imposing obligations through a more straightforward command-and-control approach, the European risk-based approach to digital technologies relies on performance criteria and accountability standards grounded in constitutional values such as fundamental rights and democracy. Although the precautionary approach also incorporates, in some respects, properties of a performance-based model, the approach to risk regulation in digital policy aims not only to provide greater flexibility in responding to risk scenarios, but also to address the imbalance between public and private actors. As a result, the obligation to assess risks shifts downward, particularly to those actors controlling information and technological infrastructures, which become responsible for assessing and mitigating risks.
At the same time, this process leads public actors to shift from evaluation and management to the evaluation and control of a pre-determined set of risks. The DSA, for instance, identifies certain types of risks for VLOPs involving interests such as electoral processes and gender-based violence.Footnote 84 Likewise, the AI Act defines specific areas of risk, also prohibiting certain AI systems.Footnote 85 The differentiation between the actors carrying out the assessment of digital risk and the institutions in charge of controlling that assessment lies at the foundation of the whole digital risk regulation model, which emancipates itself from traditional forms of risk regulation. Private and public actors are therefore the two ends of a rope that, inevitably under tension, tend to collaborate to strike a balance between public interest and market objectives.
Accountability thus becomes one of the cornerstones of this system. It transcends its conception as a mere technical instrument to become a central principle that transforms the regulatory model, establishing a functional bridge between private power and the public safeguarding of constitutional rights. This principle, crystallised in the proactive accountability defined by the GDPR,Footnote 86 radically transforms the relationship between regulators and regulated entities, establishing a continuum of responsibility. Regulatory instruments such as the DSA require VLOPs to provide periodic and detailed reports on content moderation activities and risk management.Footnote 87 The AI Act likewise requires impact assessments that allow potential impacts on fundamental rights to be anticipated and mitigated.Footnote 88 This change moves the focus from predominantly preventive models toward approaches that accommodate the inherently speculative and uncertain nature of digital risks, whose anticipatory identification and quantification is frequently unfeasible, even for private actors.Footnote 89
This approach leads to a principle-based system, where the content of compliance is shaped not by a rigid top-down approach but by the articulation of nuanced and contextual regulatory responses, avoiding both paralysing over-regulation and ineffective under-regulation. In this context, alongside accountability, proportionality becomes central in European digital policy, constituting a transversal mechanism for establishing differentiated obligations and paths of collaboration that can respond to the various levels of risk presented by different actors and technologies in the digital age.
This model of risk regulation presents significant advantages, particularly in terms of adaptability to technological threats and regulatory flexibility, but it also raises constitutional challenges. First, fundamental rights may suffer significant interference before regulatory mechanisms can be effectively enforced. Second, the degree of discretion granted to private actors in compliance and governance may be considered a threat, particularly given the dependence of public actors on the risk assessments performed by private actors. Third, if risk regulation based on constitutional values is uncertain, this affects not only the internal market but also legal certainty in enforcement. These challenges have not yet been addressed by regulators, who should carefully reflect on the safeguards and procedures governing the relationships between public and private actors, and focus on enforcement in order to avoid arbitrariness and ensure accountability.
This transformation of European risk regulation underlines that the main objective of this approach is not only addressing technological risks but also managing the imbalance of power in the relationship between public and private actors. Rather than managing the risk itself, this approach is the result of a broader constitutional situation of imbalance that requires public actors to collaborate more with private actors in order to achieve public policy objectives.
V. Conclusions
The epistemological uncertainty inherent in digital technologies, derived from both their accelerated evolution and the role of private governance, has revealed the limitations of traditional scientific criteria for addressing technological risks that transcend the merely quantifiable. In Europe, this challenge has produced a paradigmatic shift in the underlying rationality of regulation, leading to a risk-based approach that aims to govern epistemological uncertainty through the instruments of constitutionalism, with the goal of protecting fundamental rights and addressing imbalances of power.
These constitutional values have acquired indisputable centrality in the conceptualisation and contemporary regulation of digital risk, transforming fundamental rights into an authentic regulatory paradigm, operating almost as a new scientific criterion. European digital risk regulation has indeed anchored itself in essential constitutional principles such as accountability and proportionality and, in general terms, in a regulatory narrative centred on fundamental rights.
The transformation of the European risk-based approach is a phenomenon that requires a profound rethinking of traditional relationships between regulators and regulatees. In this case, European digital regulation has not focused on establishing prohibitions or static mandates, but on articulating a dynamic process of responsibility to protect fundamental rights and democratic values. Nonetheless, assessing the margin of manoeuvre introduced by the European approach to risk regulation and the collaboration between public and private actors in the definition and mitigation of risks remains a constitutional challenge, and, particularly, it requires not only procedural safeguards but also an effective enforcement of digital regulation.
Competing interests
The authors declare none.