
Conclusion

How Liberal-Democratic Governments Can Act Now

Published online by Cambridge University Press:  20 July 2023

Paul Gowder
Affiliation:
Northwestern University, Illinois

Summary

The Conclusion to The Networked Leviathan calls on political states to act to enable the governance reforms proposed by this book, and to impose additional governance reforms on platform companies. Governments have particular leverage over platform companies via the threat of antitrust regulation, their capacity to give workers (and particularly offshore contract workers, such as social media content moderators) more influence over corporate outcomes, and their power to promote the public disclosure of information about company operations. They should use that leverage. Finally, the responsibility to protect framework from international human rights law should be applied directly to platform companies.

Type: Chapter
Book: The Networked Leviathan: For Democratic Platforms, pp. 201–210
Publisher: Cambridge University Press
Print publication year: 2023
This content is Open Access and distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (CC BY-NC-ND 4.0) https://creativecommons.org/cclicenses/

To conclude this volume, I turn back to states, and particularly the United States and the European Union, the democracies with the most regulatory leverage over the major platforms. Even in the absence of a more comprehensive program like the model described in Chapter 6, there are interventions we, in our democracies, can make to improve platforms’ capacity to govern and to mitigate the harms that they inflict on us and on the rest of the world.

Of course, we must take care with any such interventions to minimize their bias in favor of the interests of powerful states and their economies, as well as their ideological and cultural predilections, even while recognizing that they must be sufficiently compatible with those interests and predilections to be realistic as proposals that could actually be enacted. There is an inherent danger of colonialism and imperialism (as discussed in Chapter 2) in regulations originating from powerful countries of the Global North.

In addition, the United States and European Union must avoid a catalog of familiar dangers of industrial regulation. First, they must avoid creating regulatory moats – that is, imposing rules on companies that are so burdensome that larger and more established companies can use their relative capacity to comply with them as a competitive advantage. This is why the model cross-company council system articulated in Chapter 6 was designed to (a) have its own staffing rather than requiring a thick interface with company personnel, (b) be paid for by a progressive-ish tax and transfer system that imposes higher financial burdens on the richest companies, and (c) be optional for smaller companies, which may choose to participate as doing so becomes compatible with their competitive position.

Similarly, our governments must avoid what we might call multistakeholder capture: the creation of multistakeholder governing bodies that are vulnerable to domination by companies due to the costs of participating in regulatory processes. We can see this as a kind of regulatory moat, or at least a member of the same family, insofar as both failure conditions for regulation go wrong because they miss that many regulations redound to the advantage of larger, wealthier, and more entrenched market participants. Multistakeholder capture has been observed in other internet governance contexts, such as ICANN and IANA (see, e.g., Cohen 2019, 230–31).Footnote 1

The final major danger to guard against is partisan capture. In both powerful and less powerful countries, politicians have frequently attempted to use regulation or the threat thereof to coerce powerful platforms into acting in their own partisan interests. This threat is why I did not list India as a core potential regulator along with the United States and the European Union: while it is a democracy with a huge market share, the Modi regime currently in power has vigorously abused platforms on its own account as well as attempted to bully them with partisan regulation (Horwitz and Purnell 2021; Pahwa 2021). In view of the similar misconduct of US politicians as described in Chapter 4 and the country’s well-known extreme polarization, it may be that the United States should be removed from the list as well – in all honesty, I left it in only because, as a US citizen, I find myself perhaps a bit biased by the hope that our own toxic politics might still be fixable. Moreover, because many of the biggest companies have their core operations in the United States, it has greater regulatory leverage than anyone else. But in view of American political dysfunction, it may be that the European Union is the last credible regulator standing.

Interventions on the Platform Workplace

One implication of Chapter 4 is that direct intervention on the structure of internal company decision-making can actually help companies make and stick to their decisions; a broader conclusion of the entire book is that more diverse and representative decision makers are likely to do a better job at preventing governance externalities. There are several legislative options that could make immediate improvements in company decision-making processes.

First, a well-understood problem within the industry is the phenomenon of the wrong people being put in charge of platform rulemaking and enforcement, like lobbyists (e.g., Wofford 2022) and customer account managers (e.g., Caplan and Gillespie 2020). This one’s a relatively easy fix: governments could simply outlaw this sort of dysfunctional corporate organization in the same way that they have passed laws providing for the independence of financial auditors in publicly traded companies. For example, the US government could bar anyone who has ever been a registered lobbyist or supervises a registered lobbyist from exercising decision-making authority over content policy or the design and ranking of recommender algorithms in a platform company.Footnote 2

Similarly, the inclusion of workers in those decisions is likely both to check the untrammeled power of executives and, to the extent workers (particularly workers in developing countries) are more diverse and have superior access to relevant kinds of knowledge, to improve company decision-making. Accordingly, governments (either the United States or the European Union) could also mandate worker representation in decisions over content policy and the design and ranking of recommender algorithms. There are a number of possibilities for the degree of worker representation, but one option could borrow (much like Chapter 6) from American administrative law and require policy changes to be subjected to a kind of notice and comment process, which would permit workers to formally weigh in on the anticipated impact of such changes and require a reasoned response from the executives making those changes. Workers could also be permitted to make policy suggestions that require a reasoned response. Finally, workers could be permitted, perhaps with a supermajority vote, to make the dialogues described above public. It would be critical in this context for “workers” to be defined in a way that includes employees of contractors carrying out company functions, such as content moderators in social media companies.

With respect to US law, one challenge to such interventions would be the First Amendment. Ongoing debate and even litigation about efforts (including by Florida and Texas) to restrict platform content moderation have revolved around the claim that such laws violate company free speech rights, since design choices for feeds such as Facebook’s and Twitter’s are expressive.Footnote 3 To the extent that these arguments are ultimately accepted, the First Amendment may impede regulations relating to content moderation or the decisional process around content moderation (just as the US government would be unlikely to succeed in trying to give newspaper workers a mandatory voice in high-level editorial decisions). Moreover, precedents such as Buckley v. Valeo have suggested both that corporate political activity (focusing on spending, but plausibly implying the inclusion of the activities of lobbyists in policy hierarchies in companies) is protected by the First Amendment and that efforts to protect an overall discursive environment cannot count as compelling interests justifying the restriction of speech.

However, there may be ways around this problem. With respect to the prohibition of lobbyist involvement in content policy, arguably such legislation would serve as a prophylactic measure to protect company free speech rights by insulating them from short-term government pressure. An aggressive constitutional lawyer might thus defend it as a kind of First Amendment paternalism, which ultimately improves the expressive capacity not – as with the speech ecosystem arguments rejected in Buckley – of third parties but of the regulated entities themselves.Footnote 4

Assuming that novel and aggressive First Amendment argument doesn’t work, two further solutions suggest themselves. One is simply for the European Union (with substantially less self-defeating free speech absolutism) to do the job. Another is to expand the scope of workplace interventions in ways that are not targeted to speech.

For example, consider a law applying to all companies with multinational operations and more than some large number of customers (say a million, or five million, or whatever). Each company over that size must have an elected workers’ council in each country in which they employ more than a certain number of workers (where “workers” for purposes of eligibility to vote, eligibility for election, and determination of which countries are covered includes employees of contractors).Footnote 5 Such a workers’ council could have the power to demand notice-and-comment style input along the lines described above in any decision which they believe to have high social impact in their home country. Such a regulation would essentially be a general human rights and workplace democracy (e.g., Ferreras and Richmond Mouillot 2017) provision, which would not be limited to company speech or to social impact that relates to the discursive ecosystem. Indeed, it might be more broadly useful in, for example, environmental protection. Accordingly, it would be unlikely to be subject to significant First Amendment challenge.

Regulators can also directly attack the pathologies of the contract worker system of content moderation and similar functions by requiring companies to more closely integrate contract workers with their regular workforces. For example, regulators could specify a maximum number of reporting layers between any contract worker and a manager directly employed by the home company. In addition to the informational benefits of such a proposal, it might also generate moral benefits in terms of promoting respectful treatment of contract workers – at least bringing them closer to main company personnel would force the latter to confront the inequitable ways that contract workers are treated.

Interventions on Platform Information

One potential barrier to effective participatory governance or external checks on companies is the near-monopoly companies hold over information about their own internal rules, enforcement processes, and the overall scale of problems. And while many companies voluntarily publish their rules (or some portion thereof) and/or “transparency reports” and other disclosures, those publications are limited to information consistent with self-perceived (and sometimes short-term) company interests.

Making matters worse, contractual terms used by companies to control competitive advantage in information, such as nondisclosure agreements for employees, potentially impede both the public disclosure of the scale of company problems and the cross-pollination of knowledge – about the sorts of problems confronted and the techniques for solving them – across companies, to civil society, and to any novel organizations created in accordance with suggestions like those in Chapter 6. Moreover, these kinds of confidentiality requirements likely bias public disclosure – when confidential information does come out, it tends to be in the form of “leaks” by employees who may be disgruntled or publicity-seeking, and hence may not present a full picture of the underlying circumstances.

Fortunately, these conditions are readily amenable to governmental intervention. As a first pass, courts in countries where company employees work (primarily the United States) should decline to enforce as contrary to public policy nondisclosure agreements relating to information on matters of public concern, and there should be a rebuttable presumption (to reduce workers’ legal costs) that disclosures relating to work in the fields of “trust and safety” or “integrity,” broadly defined, are on matters of public concern.Footnote 6 Second, organizations created to facilitate platform governance, including those similar to the recommendations of this book, should be given the power to compel disclosure (through, for example, subpoenas) of information relevant to their work.

This proposal has the additional advantage of promoting, rather than impeding, market entry – the opposite of a regulatory moat, it would make it harder for established players with robust governance knowledge to use that knowledge to improve their relative market power. However, as a bias check, the reader should be aware that it would also work to the advantage of emerging civil society organizations composed of workers in these roles, such as the Integrity Institute, with which I am affiliated.Footnote 7 It would also potentially be the target of company opposition motivated by their own interests in secrecy.Footnote 8

Interventions on Human Rights Law

If it’s true that platform companies are becoming “Digital Switzerlands” (Eichensehr 2019), then perhaps they ought to be direct subjects of the international order. There’s a scholarly literature focusing on the human rights obligations of such companies or proposing that domestic law as well as company policy ought to subject them to human rights standards – Kaye’s (2019b) contribution is the most prominent. But why not cut out the middleman?Footnote 9

That somewhat abstract suggestion could be particularly viable in the context of the increased empowerment of workers and the general public described here and in Chapter 6. Social media councils, workplace councils, intermediate popular adjudicative bodies, and the like could make direct reports to international human rights governing bodies, and hence both alert those bodies to emergent threats and impede company efforts to conceal their responsibility after the fact. And international human rights bodies could be empowered to order companies to cease certain activities, or even to temporarily cease operation in certain countries during periods of crisis. Think again of the Myanmar genocide.

One question, of course, about the application of international law to companies is how its rules would be enforced (especially if they’re treated conventionally as “non-binding”; see, e.g., Douek 2021, 40) – but this is notoriously a problem with respect to states as well, so difficulty of enforcement alone cannot be an objection to the application of international law. In fact, it may be easier to enforce international law against companies than against states: companies, of course, lack armies and territorial rights of sovereignty, and they may be vulnerable to punishments imposed on them via restricting their access to a variety of internationally accessible or managed resources, such as network backbones and namespaces. International orders could also serve as a trigger for domestic action by the governments of countries in which companies operate. More aggressively, company executives could be prosecuted in international courts.

Such uses of international human rights law may be most appropriate in the contexts of preventing (or deterring future acts of culpability in) emergencies that require greater control of user behavior, such as the Myanmar genocide, election tampering or intimidation, or the events of January 6. This is essentially the opposite of most invocations of human rights law by scholars of platforms (with the notable exception of Wilson and Land (2021)), who tend to follow Kaye (2019b) and focus on freedom of expression, typically in terms of concerns about excessive restriction of user behavior. A more appropriate focus from within human rights law, I contend, is on the responsibility to protect, which would require positive restrictions on platform communication in such emergencies.Footnote 10 This is so for two reasons.

First, subjecting platforms to free speech norms is problematic in virtue of the fact that, as Douek (2021, 51–56) explains, such norms are both contested and subject to substantial interpretive variation. I would add, consistent with the discussion in Chapter 2, that imposing parochial interpretations of such norms is objectionably colonial – why should US norms about things like the appropriate amount of respect to be shown to the Thai monarchy or the permissibility of visual depictions of the Prophet Mohammed be exported globally?

To flesh out Douek’s (quite sound) point in a bit more detail: while freedom of expression is obviously an important human rights value, in virtue of the extremely diverse collection of legitimate interpretations of the notion of freedom of expression across cultures – as well as the fact, illustrated by gamergate and US campaign finance law, that freedom of expression claims can often be made on behalf of speech that actually undermines others’ ability to speak – it seems like a poor candidate for a first human rights principle to apply as a universalistic standard to cross-national platforms. The idea of a human right to free expression works in the context of states as the object of human rights law because states have their own political communities with their own cultural traditions around speech – they have a greater capacity to legitimately interpret the universal human rights norm in a way that is particularly adapted to their distinct polity. While I argued in Chapter 5 that platforms can have an identity in a similar sense, the great diversity of cultures present on any of the larger platforms will necessarily make such an identity much thinner and less capable of supporting a clear interpretation of human rights norms surrounding free expression than, for example, the United States with its centuries of First Amendment law or a Muslim country with its distinctive religious norms about matters like the depiction of the Prophet.

Second is the simple fact that platforms have a much greater capacity to impair the right to be protected from great evils such as genocide, election manipulation, and the overthrow of democratic governments than they do to impair the right to free expression. This is a seemingly controversial claim, judging by the immense amounts of popular and scholarly ink spilled on platform “free speech,” but a moment’s reflection will suggest that it’s obviously true. Free speech exists in at least semi-competitive marketplaces where those who are “censored” by one platform can move to numerous other platforms as well as non-platform modalities of communication.Footnote 11 The reason that free speech makes sense as a binding norm for states rather than platforms is that states have soldiers and police who can shut off all methods of expression for a person by locking them in a cell or shooting them.

By contrast, if a platform fails to comply with the responsibility to protect, it can lead to direct and inescapable harm. A person killed because genocidal propaganda was spread on a platform cannot exit that platform to escape the harm. A country whose democracy is toppled because foreign agents spread misinformation on one platform is not somehow saved because some other platform does a better job. Competition doesn’t alleviate the dangers to which the responsibility to protect is directed. Only law can help.

Interventions on Competition Policy

In view of the problem of platform colonialism, governments ought to consider forms of competition policy that are compatible with market entry from non-US countries, particularly those in the Global South. For reasons described in Chapter 2, I am skeptical of policies that make it too easy for dangerous companies – such as the proliferating right-wing extremist social media platforms in the United States – to enter the market; however, entry into markets currently underserved by existing systems of platform governance may be relatively free from this worry – if the big platform companies are neglecting the safety of users in some country anyway, we might as well make it easier for competitors from that country who might be more capable of engaging in governance to enter. And although such market entrants remain subject to the related worry that smaller and more local platforms may be more susceptible to bullying by authoritarian governments, it is far from obvious that larger platforms are immune from such bullying (witness Facebook’s relationship with the Republican Party), especially given that effective governance in a country requires some local presence in that country, and any personnel (or empowered users) in a country are points of leverage for governmental coercion.Footnote 12

Grewal (2008, 173–79) has argued for alterations to network structure that permit entry and interaction as a potential remedy to unjust kinds of network power. This idea seems helpful in the case of platform colonialism, and it corresponds in part to policy initiatives already in place. For example, many advocates have argued for data portability and interoperability as a form of competition policy, and this is partly implemented in the European General Data Protection Regulation (Engels 2016). In principle, it might be possible to implement such regulations in the United States, or more robust regulations in Europe, in a way that favors competitors from countries with lower GDP per capita, in minority languages, and the like. For example, laws requiring interoperability or API access to company data might explicitly limit the entities to which access must be granted in favor of smaller companies primarily serving users in the Global South. However, I make this suggestion with some hesitation, for I lack expertise in competition policy sufficient to have any confidence about its viability. But in the context of Grewal’s analysis of network power, such policy innovations would amount to conferring a greater degree of “compatibility” insofar as the resources of the platforms would be (partly) accessible using alternative corporate (and governance) entities.Footnote 13

Bonina et al. (2021, 892) suggest that platforms in the Global South tend to have a greater degree of involvement by “NGOs, public institutions, governments, and developmental organizations.” They also identify the presence of “platform cooperatives … which rely on democratic decision-making and a shared ownership of the platform by workers and users” in the Global South (Bonina et al. 2021).Footnote 14 To the extent this is correct (no evidence is cited for the claim), this suggests that promoting platform competition from the Global South has a greater capacity to promote community empowerment, at least to the extent the NGOs and the like are run by people from the countries in question rather than from wealthy countries – an issue which warrants further investigation.

The reference to NGOs and platform cooperatives is a suitable place to close this book. At bottom, such innovations potentially represent a method of operating platforms that is separate from the imperatives of capitalism. In the absence of such imperatives, there is a substantially weakened incentive to create many of the features of our contemporary large platforms that have posed so many governance problems in the first place. For example, nonprofit platforms may have much less reason to build recommender algorithms that prioritize engagement and the drive for boundless growth even when that means serving toxic or inadequately protected markets.Footnote 15

In effect, the recommendations of this book may be seen through the lens of a kind of postcapitalist democracy. Rather than proposing the creation of new, public (and hence non-capitalistic) platforms, I have argued for recognizing the public character of the platforms we already have. They may nominally be private businesses operated for profit, but they carry out functions similar to those of states and exercise – with inexcusable clumsiness – immense amounts of power not only over their users but over the rest of the world.

Thus, the recommendations in this book ultimately amount to taking their quasi-public character and building quasi-public institutions to match it. However, I reject previous commentary which proposes to do so by transposing ideas from older forms of public-private hybridity, such as applying the First Amendment to social media companies or treating platforms as (highly regulated and subject to open access requirements) public utilities like electrical company monopolies. Neither of these is an appropriate option. The First Amendment is not appropriate because, as discussed above, social media companies operate across cultures with wildly different free expression norms, and, as discussed in Chapter 5, because to some extent they have their own distinctive voices attached to their identities as intermediate purposive organizations. The notion of a regulated public utility makes sense as a response to natural monopolies in essential goods and services, but regardless of what one thinks about the anticompetitive character of the platforms, the core governance problems they generate are not, as I argued in Chapter 2, due to a lack of competition. Moreover, we don’t want governments to regulate the platforms too extensively and directly, because, as I also argued in Chapter 2, governments often do so for the wrong reasons, such as in support of programs of political repression.

Instead, I propose to build something new, albeit rooted in the study of earlier kinds of governance: directly democratic public-private governance structures meant to be genuinely inclusive and to be tailored, not to the problems of capitalism’s past, but to the problems of capitalism’s present and future. We are unlikely to return to a world without vast global platforms with diverse groups of users coming into conflict and generating endless novelty. But I believe we can build institutions that permit us, collectively, to adapt to the challenges this scale and diversity present, and hopefully retain the vast gains in interconnection, creativity, commerce, and free expression that those platforms allow while mitigating the threats to public safety and democracy they present. Let’s get building!

Footnotes

1 This is why Chapter 6 focused most of its energy on participatory councils to be operated separately from any individual company – such that market entrants can simply plug into an existing system – but which specifically would not generate some kind of scheme of universal rules for all companies to obey, a scheme in which companies would have input (and hence into which bigger companies are likely to have more input).

2 At the limit, I suppose a company’s board and CEO would have to supervise both its lobbyists and its rule makers; however, the corporate law world has devised internal controls in other contexts notwithstanding the existence of CEOs, so I assume that there are regulatory options available. For example, executives supervising content policy functions could have some protection against termination without cause to give them some degree of decisional independence even from CEOs.

3 For discussion, see Bambauer, Rollins, and Yesue (2022); Bhagwat (2021); Goldman and Miers (2021); Langvardt (2021); Rozenshtein (2021); Kosseff (2019).

4 Cf. Elster (2000) on the ways in which constraints protecting against external pressure can promote the overall freedom of an agent.

5 As discussed in Chapter 2, some countries, such as France, do have versions of a general workers’ council system.

6 This includes the work of content moderators, who are also subject to such nondisclosure agreements (Roberts 2019, 73).

7 I serve on the Integrity Institute’s nonprofit board, as well as its Community Advisory Board, and am a fellow of the Institute. Such organizations face challenges, originating in company confidentiality rules, in building networks of workers who can be useful to the public and to policymakers.

8 This opposition may not be entirely self-interested. Arguably, eliminating the shroud of secrecy around such work would pose security threats that also expose the general public to harm by permitting bad actors to game the integrity processes (cf. Roberts 2019, 92–94 on that rationale). However, to a substantial extent, the horse has already left that barn given the vast number of leaks from companies like Facebook. In addition, the longstanding record of abuses from organizations which maintain secrecy in their rule-enforcement activities (like essentially every US law enforcement organization) suggests that this is one situation where the tradeoffs likely weigh on the side of transparency.

9 I am no specialist in international law. But at least one scholar who is, Molly Land, has argued that Article 19 of the International Covenant on Civil and Political Rights (covering freedom of expression) by its terms applies to nonstate actors (Land 2013, 443–49). In another article, Land (2019) argues that to the extent state actors delegate regulatory authority to private companies in carrying out speech-related human rights violations, their actions are also reachable by international law as state action. In particular, she argues that systems of intermediary liability like the Digital Millennium Copyright Act and Germany’s NetzDG can constitute delegated state authority (Land 2019, 404–8).

10 For a summary of the responsibility to protect in the state context, see McClean (2008). In addition to the responsibility to protect, it may also be appropriate to apply international human rights principles with respect to the rule of law, which could backstop some of the constraints described in Chapter 4.

11 I don’t deny that platform “censorship” could impair a user’s expressive interests to some degree – I simply observe that the capacity of such “censorship” to so impair is limited by the presence of even less-good alternatives.

12 On the susceptibility of local company personnel to government coercion, whether authoritarian-lawless or democratic-lawful, see, for example, Haynes (2016) and Horwitz and Purnell (2021).

13 Similarly, the multilevel system of councils described in Chapter 6 promotes what Grewal calls “malleability” insofar as it would introduce the capacity to change platforms in response to the needs of diverse communities of users and stakeholders.

14 As I read them, it is unclear whether cooperatives are relatively overrepresented in those countries.

15 Even in nonprofit platforms those incentives might not be completely absent – as discussed in Chapter 1, users have limited time to devote to a multiplicity of platforms, so those platforms which do not grow enough to leverage some degree of network effects or otherwise attract users to spend their time there might just die off, even if they are not operated for the revenue they generate.
