AI Regulation in the European Union: Examining Non-State Actor Preferences

As the development and use of artificial intelligence (AI) continues to grow, policymakers are increasingly grappling with the question of how to regulate this technology. The most far-reaching international initiative is the European Union (EU) AI Act, which aims to establish the first comprehensive, binding framework for regulating AI. In this article, we offer the first systematic analysis of non-state actor preferences toward international regulation of AI, focusing on the case of the EU AI Act. Theoretically, we develop an argument about the regulatory preferences of business actors and other non-state actors under varying conditions of AI sector competitiveness. Empirically, we test these expectations using data from public consultations on European AI regulation. Our findings are threefold. First, all types of non-state actors express concerns about AI and support regulation in some form. Second, there are nonetheless significant differences across actor types, with business actors being less concerned about the downsides of AI and more in favor of lax regulation than other non-state actors. Third, these differences are more pronounced in countries with stronger commercial AI sectors. Our findings shed new light on non-state actor preferences toward AI regulation and point to challenges for policymakers balancing competing interests in society.

As the development and use of artificial intelligence (AI) continues to grow, policymakers are increasingly grappling with the question of how to regulate this technology. While national authorities were the first to initiate regulation of AI, recent years have seen the emergence of a variety of regulatory initiatives at regional and global levels (Council of Europe 2022). This shift reflects a growing realization that AI development often is carried out by companies with transnational activities and creates externalities that do not follow national borders, calling for an international regulatory response. The most far-reaching international effort to regulate the development and use of AI technology is the European Union (EU) AI Act, proposed by the European Commission in 2021 and currently at the concluding stage of negotiation between the Council and the European Parliament. The EU AI Act will introduce a common European regulatory framework encompassing all sectors and all types of AI technology. While the scope of the Act in the first instance is limited to the EU, there is an expectation that the law might become standard setting globally, much like the General Data Protection Regulation (GDPR).
While the EU AI Act formally is negotiated among the EU's institutions, the importance of the Act for future AI development has mobilized large numbers of non-state actors seeking to influence the terms and conditions of the new regulatory framework. For business actors involved in AI development, the Act will have significant implications for their innovation potential and competitive positions. For other types of non-state actors, such as nongovernmental organizations (NGOs), research institutes, and labor unions, the Act raises critical questions about the protection of individual rights and public interests.
Whereas the influence of non-state actors on the final EU AI Act remains to be determined, extensive research suggests that such actors tend to exert significant impact on EU legislation (e.g., Dür 2008; Klüver 2013; Dür et al. 2019). Indeed, as a political system, the EU offers multiple channels whereby non-state actors may influence policymakers, ranging from open or closed consultations organized by the European Commission and hearings convened by the European Parliament to informal lobbying of member state and supranational officials.
The purpose of this paper is to offer the first systematic analysis of non-state actor preferences toward international regulation of AI, focusing on the case of the EU AI Act. What are the core concerns and regulatory preferences of non-state actors with respect to European AI regulation? Why do actors differ with regard to these concerns and preferences? Establishing and explaining the preferences of non-state actors toward European regulation of AI is of critical importance. Research on the international governance of AI is still in its infancy (Dafoe 2018; Cihon et al. 2020; Tallberg et al. 2023), and no previous study offers systematic insight into the preferences of those non-state actors that are expected to influence how AI becomes regulated (see Ehret 2022 on public preferences). In addition, knowledge about the preferences of businesses, NGOs, research institutes, labor unions, and other non-state actors is useful to policymakers tasked with balancing competing concerns in society.
Theoretically, we develop an argument about the preferences of non-state actors toward European regulation of AI. We distinguish analytically between two types of non-state actors: business actors, which are driven primarily by for-profit motives, and other types of non-state actors, which are driven by non-profit motives. Identifying innovation versus protection as the core dimension of political conflict on AI regulation, we argue that business actors and other non-state actors are likely to hold systematically different preferences. Compared to other non-state actors, business actors are less likely to express concerns about AI development and more likely to favor innovation over protection. In addition, we theorize that these differences between business and other non-state actors are conditioned by the level of AI uptake in a country, specifically, the strength of the commercial AI sector.
Empirically, we test these expectations using data on non-state actor preferences drawn from the public consultations on European AI regulation organized by the European Commission in 2020, one year prior to the tabling of the EU AI Act. Public consultations offer unique opportunities to identify the regulatory preferences of non-state actors (McKay and Yackee 2007; Bunea 2013). In all, we analyze a sample of 505 submissions by businesses, NGOs, research institutes, and other non-state actors located in the EU. We examine our hypotheses using descriptive and regression analyses of the expressed concerns and regulatory preferences of these non-state actors.
Our core findings are threefold. First, all types of non-state actors express concerns about the implications of AI and support EU regulation involving a variety of mandatory requirements. Second, there are nevertheless significant differences across types of non-state actors, with business actors being less concerned about the downsides of AI and more in favor of lax regulation than other types of non-state actors. Third, the differences between business actors and other non-state actors are more pronounced in countries with stronger commercial AI sectors than in countries with less developed AI sectors. In all, these findings suggest that non-state actors generally recognize the need for a common European regulatory framework, but attach systematically varying importance to innovation versus protection depending on actor motives (group type) and competitive position (country).
Our findings have several broader implications for research and policy. To start with, they suggest that the growing field of research on international AI governance, which so far has been focused on states and institutions, would benefit from greater attention to the concerns and preferences of non-state actors. In addition, our findings contribute to research on non-state actors in European and global governance, which so far has focused more on the populations, strategies, and impacts of such actors than on the preferences they advance. Finally, our results highlight the types of political challenges that policymakers confront when developing AI regulation, having to reconcile the competing interests of non-state actors whose support likely is critical for effective and legitimate AI governance.

Theory and hypotheses
We present our theoretical argument in three steps. First, we identify innovation versus protection as the central dimension of conflict in debates over the regulation of AI. Second, we develop our expectations about the regulatory preferences of business actors and other non-state actors on this dimension of conflict. Third, we explain why we expect the strength of the AI sector in a country to condition the regulatory preferences of non-state actors.
Our argument is anchored in rationalist theories of preferences in economics, philosophy, and political science (Arrow 1952; Frieden 1999; Hansson and Grüne-Yanoff 2022). In this tradition, an actor's preferences are understood as the way it orders possible outcomes on a given issue. Preferences are assumed to be complete (i.e., actors are capable of choosing between two or more outcomes) and transitive (i.e., those choices are internally consistent).
Actor preferences play a key role in multilateral negotiations (Zartman 1994). States come to such negotiations with competing preferences about the most desirable outcomes, and use the negotiations to build coalitions, present arguments, and strike deals in order to attain those preferences (Thomson et al. 2006; Lundgren et al. 2019). Non-state actors, too, hold preferences about the outcomes of multilateral negotiations, and typically work through channels at both domestic and international levels to influence state negotiators (Hanegraaff et al. 2016; Tallberg et al. 2018).
Our argument focuses on the stage of preference formation. Why do non-state actors hold certain preferences with respect to the regulation of AI? Theories of preference formation arrive at preferences in three principal ways: by assumption, observation, or deduction (Frieden 1999). In the first case, preferences result from assumptions that actors strive to attain certain goals, for instance, firms maximizing profit. In the second case, preferences are identified by observing the process leading to an actor pursuing certain goals, for instance, how states define their national interests. In the third case, preferences are established by deducing them from a larger pre-existing theory, for instance, deriving firms' preferences for protection from the assumption of profit maximization and the conditions facing the firm in a given market context.
Our argument is based on the method of deduction. We draw on general theories of non-state actor preferences to derive expectations about the likely preferences of business actors and other non-state actors toward European regulation of AI.
We develop these preferences in relation to one key dimension of contestation with respect to the regulation of AI: innovation versus protection. Previous analyses of preferences toward EU policymaking suggest that the European political space on different issues is structured by one or several dimensions of political conflict. Examples of such dimensions are left versus right (Hooghe et al. 2002), more versus less integration (Toshkov and Krouwel 2022), fiscal transfer versus fiscal discipline (Lehner and Wasserfallen 2019), and progressive versus conservative values (Lundgren et al. 2022).
With respect to the European regulation of AI, we assume that the key dimension of political conflict pertains to the trade-off between innovation and protection. This dimension captures different preferences with respect to how European regulation of AI should strike the balance between two competing objectives: on the one hand, creating a regulatory environment that promotes innovation in AI development, and on the other hand, introducing regulation that protects the safety, rights, and values of European citizens.
We find ample support for this assumed dimension in debates over the EU AI Act and AI regulation generally. The European Commission's proposal for a regulation (2021: 1) speaks of how the EU's approach needs to deal with "the twin objective of promoting the uptake of AI and of addressing the risks associated with certain uses of such technology." Member state negotiations in the Council are centered on the trade-off between technological development and risk protection in an effort to strike "a delicate balance" (Council of the EU 2022: 1). And debates in the European Parliament revolve around the competing goals of ensuring an innovation-friendly regulatory environment and safeguarding the rights and interests of European citizens against the risks of AI (Euractiv 2021, 2023).
The innovation versus protection dimension also characterizes other recent EU legislation on data governance. Examples include the GDPR, which set precedent for regulating data, as well as the Digital Markets Act (DMA) and the Digital Services Act (DSA), with the goals of increasing European innovation while enhancing individuals' rights over online content.
Our core argument pertains to differences in expected preferences between business actors and other non-state actors with respect to the appropriate balance between innovation and protection. Non-state actors are a broad category and encompass all actors that are not funded by, directed by, or affiliated with a government (Josselin and Wallace 2001; Avant et al. 2010; Tallberg et al. 2013). Both analytically and empirically, non-state actors overlap extensively with interest groups (Beyers 2008; Bloodgood 2011).
For our purposes, the key distinction runs between business actors, on the one hand, and other non-state actors, on the other hand (cf. Dür et al. 2022). Business actors, comprising both individual companies and business associations, are for-profit actors, which we assume are primarily driven by the goal to make money. Generating profit is the overriding concern of individual companies, while it is an indirect goal of business associations, tasked with protecting the commercial interests of their corporate members.
In contrast, other non-state actors are non-profit actors, which we assume are primarily guided by alternative concerns. NGOs, social movements, and philanthropic foundations are conventionally described as driven by values and principles, even if they are often instrumental in their pursuit of these objectives (Sell and Prakash 2004; Mitchell and Schmitz 2014). Research institutes, for their part, are conventionally portrayed as driven by knowledge creation and diffusion (Haas 1992; Miller 2007). While labor unions, similarly to business associations, seek to protect the interests of their members, those interests involve concerns that often are in tension with profit maximization, such as decent wages, job security, and good working conditions (Ahlquist 2017).
Building on these assumptions, we expect business and other non-state actors to hold different preferences, on average, when approaching the issue of how AI should best be regulated at the European level. When confronted with the choice between innovation and protection, we would expect that business actors are relatively more in favor of innovation than other non-state actors, which, conversely, are relatively more in favor of protection.
A regulatory environment that favors innovation is likely to be perceived by business actors as more conducive to their commercial interests in AI development. On balance, business actors are likely to prefer more permissive rules, lower bureaucratic hurdles, and fewer regulatory restrictions. In such a regulatory environment, European firms will enjoy greater room for maneuver as they seek to develop AI applications at the international forefront, resulting in a stronger position vis-à-vis competitors in China and the United States.
Other non-state actors are likely to be less enthusiastic about a regulatory environment perceived to favor business innovation over public protection. Instead, NGOs, research institutes, and labor unions are more likely to prefer European rules for AI development and use that prevent undue risks, safeguard the public interest, and ensure respect for fundamental rights, including privacy and non-discrimination.
These expectations translate into two hypotheses about anticipated differences between business actors and other non-state actors in their approaches to European AI regulation, as captured by the innovation versus protection dimension.
The first hypothesis focuses on the concerns expressed with respect to AI technology.
By concerns we mean the worries that actors have with regard to possible negative consequences of AI, such as threats to safety, breaches of fundamental rights, and discriminatory outcomes. We expect that business actors are less concerned with potential downsides of AI development and use than other non-state actors, since business actors privilege the commercial opportunities offered by AI.
H1: Business actors are less likely to express concerns about AI technology than other actors.
The second hypothesis extends this logic to the regulatory preferences of non-state actors toward European AI regulation. By regulatory preferences we mean the expressed interests of actors with respect to the restrictiveness of rules governing the development and use of AI. We expect that business actors are more in favor of laxer regulation of AI technology in the EU than other non-state actors, since business actors are anxious to ensure an innovation-friendly regulatory environment.
H2: Business actors are more likely to express preferences for laxer regulation of AI than other actors.
We have so far assumed that business actors and other non-state actors operate in identical environments. In practice, however, the uptake of AI varies across European countries (Brookings 2021; Tortoise 2023). Building on basic notions in political economy, we expect such differences to matter for the perspectives of non-state actors on AI development and regulation. Specifically, we anticipate that the strength of the commercial AI sector in a country conditions the concerns and preferences of business actors and other non-state actors in varying but predictable ways.
Business actors in a country with a more developed commercial AI sector are better positioned to benefit from an integrated European AI market than business actors in a country with a less developed sector. Business actors in more developed AI environments have likely benefited from network effects, competitive pressures, and commercial developments that give them an edge when entering a level European playing field. For the same reasons, business actors in less developed AI settings are likely to be worse prepared to compete on an integrated European AI market.
Turning to other non-state actors, we can expect a similar pattern, but driven by other dynamics. In countries with stronger commercial AI sectors, other non-state actors are more likely to have already encountered issues related to protection, making them more attuned to the risks of AI development. In comparison, other non-state actors located in countries with weaker commercial AI sectors are less likely to have experienced the potential downsides of AI development.
Combining these dynamics, we would expect the expressed concerns and regulatory preferences of business and other non-state actors to vary based on the strength of a country's commercial AI sector. By implication of this logic, the gap between the two groups' concerns and preferences would widen as we move from less to more developed commercial AI environments.
H3: The more developed the commercial AI sector in a country, the greater the differences in expressed concerns between business actors and other non-state actors.
H4: The more developed the commercial AI sector in a country, the greater the differences in regulatory preferences between business actors and other non-state actors.

Data and methods
To test our hypotheses, we identify the regulatory preferences of non-state actors based on responses submitted within the EU public consultation on the White Paper that presented policy and regulatory options for the AI Act. The public consultation was open for submissions between February 20 and June 14, 2020, and the intention was to consult stakeholders with an interest in AI, including AI developers, businesses and business associations, NGOs, public administrations, academic institutions, and private citizens (European Commission 2023). This process was conducted through an online platform where stakeholders could submit their comments and suggestions, both as open-ended answers and closed-form numerical responses to specific questions posed by the EU Commission. We chose to focus on this public consultation for two main reasons. First, throughout the legislative process of drafting and negotiating the EU AI Act, the public consultation on the White Paper was the most comprehensive in scope. Later consultations received fewer submissions. Second, it allows us to investigate broad stakeholder concerns with regard to AI and assess how these concerns are reflected in general regulatory preferences. Later consultation procedures focused more narrowly on specific legislative proposals.
Using public consultation submissions as a source of data on regulatory preferences is a well-established approach in research on non-state actors in the EU (Bunea 2013; Klüver 2011) and in other national and international contexts (e.g., McKay and Yackee 2007). As explained by Bunea (2013), EU public consultations represent a formalized dialogue between policymakers and non-state actors taking place at the policy formulation stage, where lobbying and interest group activity is typically the most intense. For this reason, they constitute a suitable basis for measuring the regulatory preferences of non-state actors.
One possible concern in using EU public consultations as data on regulatory preferences is the risk of bias in stakeholder participation. The EU has different consultation procedures for involving non-state actors, which affect what kinds of actors gain access to these procedures (Arras and Beyers 2019), the diversity of groups involved (Fraussen 2020), and the value stakeholders derive from participating in various consultation formats (Binderkrantz et al. 2022). While EU institutions seek to ensure that the consultation process is inclusive and transparent, open to a broad range of actors, and does not pose significant resource constraints, the possibility of biased participation cannot be excluded. While some researchers indicate that the Commission has successfully managed to alleviate stakeholder bias (Quittkat and Kotzian 2011; Bunea 2017; Binderkrantz et al. 2020), others have found that participation is skewed in favor of business interests and that bias is accentuated in consultations on policy issues that are non-salient and technically complex (Rasmussen and Carroll 2014; Røed and Hansen 2018).
Since the population of relevant stakeholders in the AI policy domain is unknown, we cannot determine the risk of stakeholder bias in our specific sample. The issue of AI is technical, which may increase bias, but has also been a salient topic of popular and policy discussion, which may alleviate bias. The observed distribution of participation does not suggest grave asymmetries across actor types or geographic locations (see below). The possible exception is the large presence of groups based in Belgium, which is to be expected due to the location of the EU headquarters in Brussels, leading many non-state actors to establish a formal presence there as a basis for lobbying activities. While our sample is therefore arguably typical for EU public consultations, we are cautious in generalizing our results beyond the participating organizations.
The public consultation on the EU AI Act received a total of 1,216 contributions. Of these, we exclude 460 responses that lack information on the identity of the stakeholder. Given our theoretical interest, we also exclude responses from non-EU entities (119) and private citizens (132), but we report results where the former category is included in our robustness tests. Our final sample includes 505 responses by entities located within the EU, including non-EU entities that report an office or headquarters within the EU, submitted in a variety of languages. Tables 1 and 2 show the distribution of responses in the sample across actor type and country. We note that about 40 percent of responses were received from stakeholders that we classify in the business category, which includes both business associations and individual firms, while other groups make up the remaining 60 percent. A first policy issue is formulated as concerns about AI, captured by questions on the importance of potential risks of AI (F25-30). Responses are recorded on a 1-5 scale, where 5 indicates that the respondent considers a specific concern "very important" and 1 "not important at all".
A second policy issue is formulated as regulatory stringency, which encompasses questions relating to the preferred design of the regulatory provisions of the AI Act, specifically the importance of mandatory requirements regarding the quality of training datasets (F39), the keeping of records and data (F40), information on the purpose of AI systems (F41), robustness and accuracy of AI systems (F42), human oversight (F43), and clear liability and safety rules (F44).Responses to these questions are analogously recorded on a 1-5 scale.
The policy preferences of each respondent to the questionnaire are indicated by the submitted values (1-5) on these dimensions of concern and regulatory design. For example, a response of "5" on question F43 is assumed to indicate a strong policy preference in favor of the EU AI Act including mandatory requirements for human oversight in AI systems. This approach to measuring policy preferences is consistent with previous research on non-state actor influence (e.g., Bunea 2013) and EU decision-making (e.g., Lundgren et al. 2019). Table A.1 provides further detail on the questions.
In our analyses, policy preferences are reflected in two dependent variables, observed at the level of non-state actor consultation submissions.The first variable measures the level of concerns about AI and is calculated as the unweighted mean of each respondent's submitted scores on the questions pertaining to AI concerns (F25-30).The resulting interval variable can take values between 1 and 5, where lower values correspond to a lower level of general concern about the risks of AI and higher values indicate a higher general concern.The second dependent variable measures regulatory stringency and it is analogously created as the unweighted mean of the responses to questions F39-44, with higher values corresponding to a preference for a more demanding AI regulatory framework and lower values to a preference for a laxer framework.In our robustness checks, we present results where the constituent components (questions) are used as dependent variables.
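The construction of these two dependent variables amounts to row-wise unweighted means over the constituent questions. A minimal sketch (the data frame and response values are hypothetical, not the consultation data):

```python
import pandas as pd

# Two hypothetical respondents; columns are 1-5 questionnaire items.
df = pd.DataFrame({
    "F25": [5, 4], "F26": [5, 3], "F27": [4, 4],
    "F28": [5, 5], "F29": [4, 4], "F30": [5, 4],   # concern items
    "F39": [5, 4], "F40": [5, 4], "F41": [5, 5],
    "F42": [4, 4], "F43": [5, 4], "F44": [5, 5],   # stringency items
})

concern_items = ["F25", "F26", "F27", "F28", "F29", "F30"]
stringency_items = ["F39", "F40", "F41", "F42", "F43", "F44"]

# Unweighted means across the constituent questions; both indices are
# bounded on [1, 5] by construction.
df["concern"] = df[concern_items].mean(axis=1)
df["stringency"] = df[stringency_items].mean(axis=1)
```

Because the indices are simple means, the component-wise robustness checks reduce to regressing each constituent column separately.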
On the explanatory side, we include a categorical variable to represent actor type, which records the type of the observed non-state actor (see Table 1).To facilitate substantive interpretation, we in some models employ a dichotomous variable, business actor, which takes the value of 1 if the observed actor is a business association or individual firm, and 0 otherwise.
We measure the strength of a country's AI sector based on data from the Global AI index, which benchmarks countries on their level of investment, innovation, and implementation of AI technologies.
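The explanatory-side coding described above can be sketched as follows; the actor labels, country codes, and index scores are illustrative assumptions, not the paper's data:

```python
import pandas as pd

# Hypothetical submissions with actor type and country of origin.
subs = pd.DataFrame({
    "actor_type": ["company", "business_association", "ngo", "academic"],
    "country": ["DE", "FR", "BE", "SE"],
})

# Dichotomous indicator: 1 for business associations and individual firms.
subs["business_actor"] = subs["actor_type"].isin(
    ["company", "business_association"]).astype(int)

# Country-level strength of the commercial AI sector (illustrative scores
# standing in for the commercial component of the Global AI index).
ai_index = pd.DataFrame({"country": ["DE", "FR", "BE", "SE"],
                         "ai_commercial": [6.1, 5.4, 3.2, 4.8]})
subs = subs.merge(ai_index, on="country", how="left")
```

Merging the country-level index onto the submission-level data in this way yields the two-level structure that the conditional hypotheses (H3-H4) require.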

Results
We begin our empirical analysis by presenting some descriptive analyses of patterns that emerge when responses are aggregated at the country level. Figure 1 shows the mean value of the two key dependent variables for the submitted actor responses, across the countries in the sample. We make two key observations. First, on both measures, scores are considerably closer to their maximum (5) than to the lower end of the scale (1). This indicates that, on average, non-state actors consider concerns about AI as "important" to "very important," and that they correspondingly consider it "important" to "very important" to include a range of mandatory requirements in the EU AI Act. Across all groups and countries, the mean score is 4.3 for concern about AI and 4.5 for regulatory stringency. In other words, non-state actors who participated in the EU's public consultations must be considered quite worried about the implications of AI and supportive of relatively demanding regulation. Second, while differences in mean scores across actors from different countries are relatively modest, there are interesting patterns of variation and co-variation. It is clear that actors from some countries, such as Finland, hold views that are considerably more AI-friendly than others, both in terms of their views of the risks of the technology and of how it should be regulated. In general, actors from countries with low means on concern tend to have lower values on regulatory preferences, and vice versa, suggesting that the level of concern about AI is correlated with regulatory preferences. This is consistent with the interpretation that regulatory preferences are partly a function of the level of concern about AI.
Turning to our regression analysis, Figures 2 through 4 exhibit the principal results in the form of adjusted predictions (full regression tables can be found in the Appendix and Online Appendix, which also reports robustness checks). Our first hypothesis was that business actors would express fewer concerns about AI technology than other actors, and the predictions in Figure 2 are consistent with this expectation. The contrast also surfaces in the qualitative comments submitted during the consultation. PICUM, for example, points to risks for "fundamental rights in the areas of policing and immigration control . . . as well the use of AI in sensitive areas, such as the use of public services without adequate democratic oversight, transparency or evidence to justify the need or purpose of its use." These responses illustrate the reasoning that leads actors to weigh AI concerns differently. Whereas the response by the business actor Thales SA emphasizes that the AI Act should recognize the positive utility of AI, the response by PICUM emphasizes how the application of AI raises important concerns.
We find support also for our second hypothesis that business actors hold preferences for a less demanding regulatory framework on AI. Figure 3 exhibits the predictions based on our regression models. The predicted level of importance of regulatory stringency for business actors is 4.18, suggesting that this type of actor typically favors a laxer regulatory environment for AI than academic (4.54), NGO (4.72), and other (4.76) actors. Indeed, it is noteworthy that nearly all non-business actors are very close to the maximum value on all dimensions of the regulatory framework considered in the questions included in this analysis. While all types of non-state actors see a need for regulation of AI development that is protective of individual rights, transparent, and incorporates human oversight, business actors are relatively more interested in balancing such protection against room for innovation. The contrast between business actors and other actors is again reflected in the qualitative comments submitted during the consultation procedure. For example, the Computer & Communication Industry Association Europe, a business association, stresses that introducing strict liability for AI "would have a chilling effect on innovation, increase development costs and the uptake of AI," whereas Digital Europe, an organization representing the digital industry, argues that the formulation of the AI Act needs to "avoid burdensome requirements for companies serving markets across the world." Conversely, many NGO submissions point to the need for strong oversight and regulation. For example, PICUM's submission argues that compliance with a prospective AI Act "must be evaluated by a trusted external actor, and not on the basis of self-regulation," whereas the All European Trade Union wants to include provisions to "mandate that any machine learning software taking decisions regarding humans and specifically workers or embedded in a safety-critical system be explainable - and prohibit its use if not the case." In general, business actors tend to favor an AI Act with fewer mandatory requirements and a higher degree of self-regulation, whereas non-business actors prefer more stringent mandatory requirements and stronger and more centralized compliance monitoring.
Thus far, our analysis has concluded that there are distinct differences between the concerns and regulatory preferences of business actors and other groups that participated in the EU's public consultation on the EU AI Act. We now proceed to investigate whether these differences are conditional on country-level characteristics. Recall that our third and fourth hypotheses were formulated to test the propositions that differences between business actors and non-business actors would be accentuated in countries with more developed AI sectors, both regarding concerns about AI (H3) and regulatory preferences (H4).
Our evidence is supportive of both hypotheses. Figure 4 illustrates that the effect of group type on the level of concern about AI (left) and regulatory preferences (right) varies across the range of the underlying variable, the commercial component of the Global AI index.
The substantive effect is non-negligible. Whereas a business actor from a country with the lowest level of commercial AI development (0) would have a predicted level of concern of about 4.1, an actor from a country with the highest level (10) would have a predicted value of about 3.8. For the regulatory stringency index, the same shift corresponds to a reduction in predicted values from 4.4 to 4.1. In other words, consistent with our conjecture, there is a tendency toward greater dispersion across business and non-business actors as a country's AI industry develops.
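The logic of this conditional effect can be illustrated with a minimal sketch in Python using statsmodels. This is not the authors' code and uses synthetic data; the variable names (`business`, `ai_index`, `concern`) and the data-generating parameters are hypothetical, chosen only to mimic the pattern described above: an interaction between actor type and a country-level index, with predictions computed at the extremes of the index.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "business": rng.integers(0, 2, n),   # 1 = business actor (hypothetical coding)
    "ai_index": rng.uniform(0, 10, n),   # commercial AI index score (0-10)
})
# Synthetic outcome: business actors' concern declines as the AI index rises
df["concern"] = (
    4.5
    - 0.3 * df["business"]
    - 0.03 * df["business"] * df["ai_index"]
    + rng.normal(0, 0.3, n)
)

# OLS with an interaction between actor type and the country-level index
model = smf.ols("concern ~ business * ai_index", data=df).fit()

# Predicted concern for a business actor at the lowest (0) and highest (10)
# values of the index
pred = model.predict(pd.DataFrame({"business": [1, 1], "ai_index": [0, 10]}))
print(pred.round(2))
```

The point of the sketch is the prediction step: the gap between the two predicted values is what Figure 4 traces across the full range of the index.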
Submissions from actors in countries with less developed commercial AI sectors are more similar to each other than submissions from actors in countries with more developed sectors. In sum, we find that our empirical data are supportive of our theoretical propositions reflected in H1-H4. To ascertain that our results are not driven by particularities of modeling, specification, or data choices, we performed three main types of robustness checks.
First, we evaluated whether our results are an artifact of the construction of the indices for concern about AI and regulatory stringency. We estimated separate regression models for each component of the indices, based on each of the constituent questions in the public consultation questionnaire. Tables A.3 and A.4 present the results, demonstrating that our results are not contingent on including or excluding any particular question. Indeed, there is a high degree of similarity of results across each of the question-specific models.
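This item-by-item robustness check can be sketched as follows. Again, this is synthetic data rather than the study's dataset, and the item names (`q1`-`q3`) are hypothetical stand-ins for the constituent questionnaire items; the sketch only shows the mechanics of re-estimating the same model once per index component.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({"business": rng.integers(0, 2, n)})  # 1 = business actor
items = ["q1", "q2", "q3"]  # hypothetical constituent questionnaire items
for q in items:
    # Each item loads on the same latent difference between actor types
    df[q] = 4.5 - 0.3 * df["business"] + rng.normal(0, 0.4, n)

# One regression per item instead of a single regression on the pooled index
coefs = {
    q: smf.ols(f"{q} ~ business", data=df).fit().params["business"]
    for q in items
}
print({q: round(b, 2) for q, b in coefs.items()})
```

If no single item drives the pooled result, the actor-type coefficients should be similar in sign and magnitude across all item-specific models.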
Second, we used alternative approaches to account for the clustered nature of our data.
Table A.5 in the online appendix presents results for a multilevel model with varying intercepts for countries (Gelman and Hill 2006). Table A.6 presents models with country fixed effects.
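The two clustering strategies can be contrasted in a short sketch, again on synthetic data with hypothetical variable names rather than the study's own dataset: a varying-intercept multilevel model fitted with statsmodels' `mixedlm`, and an OLS specification with country dummies.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
countries = [f"c{i}" for i in range(20)]
n = 400
df = pd.DataFrame({
    "country": rng.choice(countries, n),
    "business": rng.integers(0, 2, n),
})
# Country-level intercept shifts plus individual-level noise
shifts = dict(zip(countries, rng.normal(0, 0.2, len(countries))))
df["concern"] = (
    4.4 - 0.35 * df["business"]
    + df["country"].map(shifts)
    + rng.normal(0, 0.3, n)
)

# (a) Multilevel model: varying intercepts by country
ml = smf.mixedlm("concern ~ business", data=df, groups=df["country"]).fit()

# (b) Country fixed effects: dummies absorb all country-level variation
fe = smf.ols("concern ~ business + C(country)", data=df).fit()

print(round(ml.params["business"], 2), round(fe.params["business"], 2))
```

When results are robust, the actor-type coefficient should be substantively similar under both specifications, as the article reports for Tables A.5 and A.6.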
The results exhibit no substantive deviation from our main results, and we again observe that business actors deviate significantly from other actors, both with regard to concern about AI and regulatory preferences.
Third, while we have theoretical reasons to focus on non-state actors from within the EU, we wanted to ascertain that our results are not driven by the exclusion of non-EU responses.
In Table A.7, also in the online appendix, we include responses from non-EU groups. Again, the results are very similar to the EU-only results, and we also note (in Models 3 and 4) that the difference between business and non-business actors holds among the non-EU groups as well.

Conclusion
The EU AI Act will introduce a common European regulatory framework encompassing all sectors and all types of AI technology. Because of its expected far-reaching consequences, the proposed Act has attracted considerable attention from non-state actors trying to influence the terms and conditions of the new framework. In this article, we have offered the first systematic analysis of non-state actor preferences toward international regulation of AI, focusing on the case of the EU AI Act. Theoretically, we have developed an argument about the varying concerns and preferences of business actors and other non-state actors with respect to AI technology and its regulation. Empirically, we have tested our argument using data from the public consultations organized by the European Commission in 2020, conducting descriptive and regression analyses of the expressed concerns and regulatory preferences of non-state actors.
Our principal results are threefold. First, we find that all types of non-state actors express concerns about AI technology and are in favor of regulating its development and use at the European level. Second, as expected, we nonetheless observe significant variation across types of non-state actors, both with regard to expressed concerns and regulatory preferences. Business actors tend to favor a laxer regulatory environment compared to other non-state actors, privileging innovation over protection. Third, we find that the strength of the commercial AI sector in a country affects the differences between business actors and other types of non-state actors. In countries where the commercial AI sector is more developed, the differences in concerns and preferences between business actors and other non-state actors become more pronounced.
While this article contributes important new evidence on non-state actor preferences toward AI regulation, we should also note the study's limitations and how future research might address them. For one thing, we have worked with a simplified dichotomy between business actors and other non-state actors.
Yet, for now, our findings carry three broader implications for research and policy. First, our study adds to the small but swiftly growing field of research on regional and global AI governance (Dafoe 2018; Tallberg et al. 2023). Previous research on AI governance beyond the nation state has tended to focus on mapping the emerging global AI governance regime (Butcher and Beridze 2019; Schmitt 2021), institutional designs for the governance of AI (Cihon et al. 2020), and key principles guiding AI regulation (Jobin et al. 2019). In contrast, this article privileges non-state actors, showing how such actors demand international regulation of AI but hold varying preferences about the appropriate balance between business innovation and public protection. Second, our findings fill an important gap in the understanding of non-state actor preferences in European and global governance. Systematic examinations of the preferences of non-state actors are considerably less common than inquiries into the populations (Wonka 2010), strategies (Hanegraaf et al. 2016), and impacts (Tallberg et al. 2018) of non-state actors.
We join other recent contributions (Dür et al. 2023) in examining the preferences of non-state actors as they seek to influence the terms and conditions of international regulation.This article further shows how data from public consultations can provide a critical resource for mapping and explaining non-state actor preferences (Bunea 2015).
Finally, our results shed light on the types of interest conflicts that policymakers confront when developing AI regulation. Non-state actor support is likely critical for AI regulation to be effective and legitimate. Our analysis shows that policymakers need to balance the competing concerns and preferences of business actors, on the one hand, and NGOs, research institutes, and labor unions, on the other hand. In addition, it raises important knock-on questions about the influence of competing non-state actors on state positions in multilateral negotiations and on international regulatory outcomes. As the most comprehensive regulatory framework worldwide, the EU AI Act presents a scientifically valuable and politically important case for exploring these issues.

Figure 1. Mean level of concern about AI (left) and mean level of preferred regulatory stringency (right).

Figure 3. Adjusted predictions of group type on regulatory preferences (inclusion of mandatory requirements).

Figure 4. Adjusted predictions of group type, conditional on national-level AI index scores.
Future research may contribute more fine-grained analyses of the preferences of different types of business actors, from larger tech corporations to smaller start-ups, and of different types of other non-state actors, from NGOs to research institutes and labor unions. Furthermore, future research could seek to broaden the scope of the studied non-state actors beyond those that participate actively in public consultations. While participation in a consultation procedure is indicative of an interest in influencing AI regulation, we cannot exclude that other non-state actors chose other channels for expressing their concerns and preferences. Finally, future research could assess the generalizability of these findings by conducting similar analyses of non-state actor preferences toward AI regulation in other international settings. The Council of Europe, the Organisation for Economic Co-operation and Development (OECD), and the United Nations Educational, Scientific and Cultural Organization (UNESCO) are all engaged in developing principles for the development and use of AI technology.

Table 1. Distribution of actors in sample, by actor type.
Note: The academic category includes "Academic/Research institutions"; the business category includes "Company/Business organization" and "Business Association"; the NGO category includes "Consumer organization" and "NGO (Non-governmental organization)"; the other category includes "Trade Union", "Public authority", and "Other".

3 We focus on the commercial component of the index.

The contrast between business actors and other actors is reflected in the content and orientation of the qualitative submissions to the public consultation. A comment submitted by Thales SA, a large French business actor in the aerospace sector, exemplifies this: "As a general remark concerning this EU consultation, the emphasis seems to be put more on concerns than on opportunities. Highlighting examples of beneficial impact and added-value would be appropriate in order to further foster societal acceptance." The tone changes significantly when turning to a statement by a non-state actor, for instance in the response submitted by the Platform for International Cooperation on Undocumented Migrants (PICUM), an NGO headquartered in Brussels: "We are particularly concerned about the use of AI breaching [...]"

Figure 2. Adjusted predictions of group type on level of concern about AI (1-5). Higher values correspond to a higher concern. Average marginal effects with 95 percent confidence intervals. Calculation based on Model 1 in Table A.2. Standard errors clustered on countries. N=411.