In this paper, we consider a nonstandard multidimensional risk model in which the claim sizes $\{\vec{X}_k, k\ge 1\}$ form a sequence of independent and identically distributed random vectors with dependent components. By assuming a regression dependence structure between the inter-arrival times and the claim-size vectors, we extend regression dependence to a more practical multidimensional risk model. For univariate marginal distributions of the claim vectors with consistently varying tails, we obtain precise large-deviation formulas for the multidimensional risk model with the regression size-dependent structure.
Estimates of global and national emissions of carbon dioxide (CO2) are important for scientific understanding and for public policy on global climate change. Estimates published annually often revise figures from previous years; these revisions of CO2 emissions data reflect revisions of the energy data from which the emissions are estimated. Learning takes place as missing values are compiled, estimated values are revised, and data management systems are updated. Revisions are a frequent feature of the database and are widespread among countries, commodities, and transactions. We have examined 11 annual releases of the United Nations Energy Statistics Database (those published from 2010 to 2020) to determine, at the level of the detailed statistics, which values change and what the magnitudes and patterns of change are. Revisions are most common in recent years, among developed countries, and among data on liquid fuels. They are generally small, and there are no indications of systematic manipulation or bias. Revisions of specific numbers are believed to represent improvements in accuracy, although the absence of revisions does not itself indicate accuracy. This examination of revisions does not by itself permit a quantitative estimate of data uncertainty, but it does suggest that the estimates of global and national totals of CO2 emissions are generally consistent and that both absolute values and trends are reliable over time and sufficiently accurate for scientific understanding and public policy.
Chapter 8 analyses the use of AI and ADM tools in welfare and surveillance through the lens of critical race studies. Aitor Jiménez and Ainhoa Nadia Douhaibi point to the necessity of building a non-Anglocentric theoretical framework from which to study a new global phenomenon: the digital welfare and surveillance state. Accordingly, the authors frame its rise within the wider context of the Southern European iteration of racial neoliberalism, which they coin the Islamophobic Consensus. As the chapter demonstrates, the digital welfare and surveillance state does not rely on the same technologies, focus on the same subjects, or pursue the same objectives in every context. On the contrary, it draws on contextual genealogies of domination, specific socioeconomic structures, and distinctive forms of distributing power. The authors provide an empirical analysis of the ways the Islamophobic Consensus is being operationalised in Catalonia and expose the overlapping mechanisms of racism governing the lives of racialised Black and brown young adults. The chapter demonstrates how ADM technologies designed to govern “deviant”, “risky”, and “dangerous” Muslim youth “radicals” connect with colonial punitive governmental strategies.
The chapter is structured in two parts. The first part analyses the automated surveillance-and-governance apparatus deployed over Islamic communities in Catalonia. The second part frames the ideological, epistemological, and historical foundations of the Southern European path to racial neoliberalism, here labelled the Islamophobic Consensus. Drawing on surveillance and critical race studies, the authors synthesise the defining features that distinguish this model of domination from other iterations of neoliberal racism.
Chapter 10 explores the increasingly blurred line between public and private authority in designing and applying AI tools, and searches for the safeguards necessary to ensure the rule of law and the protection of fundamental rights. ADM tools increasingly sort individuals, with important consequences. Governments use such tools to rank and rate their citizens, creating a data-driven infrastructure of preferences that conditions people’s behaviours and opinions. Some commentators point to rule-of-law deficits in the automation of government functions; others emphasize how such technologies systematically exacerbate inequalities; still others argue that a society constantly being scored, profiled, and predicted threatens due process and justice generally. Using the case of Houston Federation of Teachers v. Houston Independent School District as a starting point, Lin asks some critical questions that remain unanswered. How are AI and ADM tools reshaping professions like education? Does the increasingly blurred line between public and private authority in designing and applying these algorithmic tools pose new threats? Premised upon these scholarly and practical inquiries, the chapter seeks to identify appropriate safeguards necessary to uphold rule-of-law values, protect fundamental rights, and harness the power of automated government.
Chapter 5 examines traditional data protection law’s regulatory outcomes. It shows why data rights and rules, while desirable, don’t address the core problems of the contracts model and can’t work well without the liability model. Data rights unintentionally impose administrative burdens on the very people they protect. Mandatory rules address power asymmetries and manipulation better than defaults do. But our procedural rules overregulate while they underprotect: they benefit large players by adversely affecting new entrants, and they allow companies to comply through mere box-ticking exercises. Against this backdrop, laws legitimize exploitation that companies can carry out while remaining compliant. A risk-reduction approach based on standards would reduce informational exploitation.
In the future, administrative agencies will rely increasingly on digital automation powered by AI. Can U.S. administrative law accommodate such a future? Not only might an automated state readily meet longstanding administrative law principles, but the responsible use of AI might perform even better than the status quo in fulfilling administrative law’s core values of expert decision-making and democratic accountability. AI governance clearly promises more accurate, data-driven decisions. Moreover, due to their mathematical properties, AI and ADM tools might well prove to be more faithful agents of democratic institutions. Yet even if an automated state were smarter and more accountable, it might risk being less empathic. Although the degree of empathy in existing human-driven bureaucracies should not be overstated, a large-scale shift to the use of AI tools by government will pose a new challenge for administrative law: ensuring that an automated state is also an empathic one.
Conversational remembering occurs when people jointly recall past experiences, which may themselves have been shared. It brings social benefits to both the person telling the narrative and the one receiving it (e.g., developing and strengthening friendships, fostering entertainment, and consolidating group identity). COVID-19 lockdowns significantly affected social interaction, including the face-to-face interactions in which conversational remembering typically occurs. The aim of this study was to explore how WhatsApp group messages supported conversational remembering in a large group of friends living in Buenos Aires, where a complete lockdown was in force between 19 March 2020 and 6 November 2020. To this end, we conducted a mixed-methods longitudinal study. The data consisted of 32,810 WhatsApp group messages collected over a period of 700 consecutive days, from 13 April 2019 to 13 March 2021. Our study shows that WhatsApp group messages enabled group members to stay connected during the COVID-19 lockdown period. This occurred through remembering together situations, events, and actions associated with the group's identity. The use of WhatsApp group messages may have represented an adaptive collective behaviour in response to changes in global social norms.
The channels identified as highly vulnerable to money laundering are money remitters (both licensed and underground operators), corporate vehicles, designated non-financial businesses and professions, and the banking sector. As gatekeepers of the financial system, banks are required to develop and implement policies and practices that mitigate the money laundering risks that apply to them. Banks that use AI have the potential to make anti-money laundering (AML) measures faster, cheaper, and more efficient. AI could help banks identify risks and monitor, communicate, and respond to suspicious activity more effectively, ultimately assisting them in maintaining compliance with AML standards. This chapter aims to shed light on the advantages and challenges of adopting AI to mitigate money laundering risks and protect the integrity of the financial system.
Chapter 3 shows why the contracts model doesn’t work: consent is absent in the information economy. Privacy harm can’t be seen as a risk that people accept in exchange for a service. Inferences, relational data, and de-identified data aren’t captured by consent provisions. Consent is unattainable in the information economy more broadly because the dynamic between corporations and users is plagued with uneven knowledge, inequality, and a lack of choices. Data harms are collective and unknowable, making individual choices to reduce them impossible. Worse, privacy has a moral hazard problem: corporations have incentives to behave against our best interests, creating profitable harms after obtaining agreements. Privacy’s moral hazard leads to informational exploitation. One manifestation of valid consent in the information economy is the consent refusal. We can take refusals seriously by thinking of people’s data as part of them, just as their bodies are.
We consider the minimum spanning tree problem on a weighted complete bipartite graph $K_{n_R, n_B}$ whose $n=n_R+n_B$ vertices are random, i.i.d. uniformly distributed points in the unit cube in $d$ dimensions, and whose edge weights are the $p$-th power of their Euclidean distance, with $p>0$. In the large-$n$ limit with $n_R/n \to \alpha_R$ and $0<\alpha_R<1$, we show that the maximum vertex degree of the tree grows logarithmically, in contrast with the classical, non-bipartite case, where a uniform bound depending only on $d$ holds. Despite this difference, for $p<d$ we are able to prove that the total edge cost, normalized by the rate $n^{1-p/d}$, converges to a limiting constant that can be represented as a series of integrals, thus extending a classical result of Avram and Bertsimas to the bipartite case and confirming a conjecture of Riva, Caracciolo and Malatesta.
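The convergence claim can be written schematically as follows; the symbols $\mathrm{MST}_p$ (total $p$-th-power edge cost of the bipartite minimum spanning tree) and $C_{\mathrm{MST}}$ (the limiting constant) are our notation for illustration, and the abstract does not specify the mode of convergence:

```latex
% Schematic form of the stated limit (notation ours, not from the paper):
% the total p-th-power edge cost, rescaled by n^{1-p/d}, tends to a
% constant depending on p, d, and the asymptotic red fraction alpha_R.
\[
  \frac{\mathrm{MST}_p\!\left(K_{n_R,\,n_B}\right)}{n^{1-p/d}}
  \;\longrightarrow\;
  C_{\mathrm{MST}}(p,d,\alpha_R),
  \qquad 0 < p < d, \quad \frac{n_R}{n}\to\alpha_R\in(0,1).
\]
```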
Social welfare has long been a priority area for digitisation and, more recently, for ADM. Digitisation and ADM can either advance or threaten the socio-economic rights of the marginalised. Current Australian examples include the roll-out of online and app-based client interfaces and compliance technologies in Centrelink. Others include work within the National Disability Insurance Scheme (NDIS) on the development of virtual assistants, or the use of AI to leverage existing data sets to aid or displace human decision-making. Drawing on these examples and other recent experience, this chapter reviews the adequacy of traditional processes of public policy development, public administration, and legal regulation and redress in advancing and protecting the socio-economic rights of the marginalised in the rapidly emerging automated welfare state. It is argued that protections are needed against the tendency of ADM to collapse program design choices so that outliers, individualisation, complexity, and discretion are excluded or undervalued. Innovative new processes may be needed, such as genuine co-design and collaborative fine-tuning of ADM initiatives, new approaches to (re)building citizen trust and empathy in an automated welfare state, and creative new ways of ensuring both equal protection of the socio-economic rights of the marginalised in social services and responsiveness to user interests.
Chapter 2 shows the falseness of two ideas that underlie the central elements of privacy law: that people make fully rational privacy choices and that they don’t care about their privacy. These notions create a dissonance between law and reality, which prevents laws from providing meaningful privacy protections. Contrary to the rationality assumption, context has an outsized impact on our privacy decisions, and we can’t understand what risks are involved in our privacy “choices,” particularly with AI inferences. The notion that we’re apathetic is prevalent in popular discourse about how much people share online and in the academic literature on “the privacy paradox.” Dismantling the myth of apathy shows there’s no privacy paradox: people simply face uncertainty and unknowable risks. People make privacy choices in a context of anti-privacy design, such as dark patterns. In this process, we’re manipulated by corporations, which are more aware of our biases than regulators are.
Chapter 1 ties together the problems with the central elements of privacy law: the individual choice-based system, the fair information principles that originated it, the view that privacy is about secrecy, and dichotomies such as public versus private. We don’t have actual choices about our data beyond mechanically agreeing to privacy policies, because we lack outside options and information such as what the choice means and what risk we’re taking on by agreeing. The choice-based approach creates a false binary of secret and open information when, in reality, privacy is a spectrum. The idea that someone, at any given time, has either total privacy or no privacy at all is unfounded. Additionally, data are bundled: you can’t reveal just one thing without letting companies infer other things. Reckoning with this reality defeats the popular “I have nothing to hide” argument, which traces back to Joseph Goebbels.