We derive large-sample and other limiting distributions of components of the allele frequency spectrum vector, $\mathbf{M}_n$, joint with the number of alleles, $K_n$, from a sample of $n$ genes. Models analysed include those constructed from gamma and $\alpha$-stable subordinators by Kingman (thus including the Ewens model), the two-parameter extension by Pitman and Yor, and a two-parameter version constructed by omitting large jumps from an $\alpha$-stable subordinator. In each case the limiting distribution of a finite number of components of $\mathbf{M}_n$ is derived, joint with $K_n$. New results include that, in the Poisson–Dirichlet case, $\mathbf{M}_n$ and $K_n$ are asymptotically independent after centering and norming for $K_n$. Notably, especially for statistical applications, in the other cases the limiting distribution of a finite number of components of $\mathbf{M}_n$, after centering and an unusual $n^{\alpha/2}$ norming and conditional on $K_n$, is normal.
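To make these objects concrete, here is a minimal Python sketch (ours, not the paper's; the function name and seating conventions are illustrative) that simulates the two-parameter Chinese restaurant process underlying the Pitman–Yor model and reads off $K_n$ and the components of $\mathbf{M}_n$; setting $\alpha = 0$ recovers the Ewens model.

```python
import random
from collections import Counter

def pitman_yor_spectrum(n, alpha, theta, seed=0):
    """Simulate the two-parameter Chinese restaurant process.

    Returns (K_n, M_n), where K_n is the number of distinct alleles in a
    sample of n genes and M_n maps j to the number of alleles carried by
    exactly j genes (the allele frequency spectrum).
    Assumes 0 <= alpha < 1 and theta > -alpha.
    """
    rng = random.Random(seed)
    sizes = []                          # sizes[i] = genes carrying allele i
    for m in range(n):                  # m genes sampled so far
        u = rng.random() * (theta + m)
        if u < theta + alpha * len(sizes):
            sizes.append(1)             # new allele: prob (theta + alpha*K)/(theta + m)
        else:
            u -= theta + alpha * len(sizes)
            i = 0                       # existing allele i: prob (sizes[i] - alpha)/(theta + m)
            while u >= sizes[i] - alpha:
                u -= sizes[i] - alpha
                i += 1
            sizes[i] += 1
    return len(sizes), Counter(sizes)

# Example: K_n, M_n = pitman_yor_spectrum(10_000, alpha=0.5, theta=1.0)
```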
During the 20th century, dealing with grief through an ongoing involvement with the deceased (such as speaking to their grave) was seen as pathological by Western authors such as Sigmund Freud. Nowadays, we are presented with the opportunity to continue interacting with digital representations of the deceased. In response, the paper adopts an Ubuntu perspective, i.e., a sub-Saharan African philosophy focussed on community and relationship, to provide a toolkit for using this emerging technology. I will argue that the Ubuntu framework I propose contributes to the use of griefbots in two ways. The first is that it shows that it is morally permissible to use griefbots to assuage our grief. The second is that it delineates how we can ethically use the technology. To do so, I split my analysis into four sections. In the first section, I show that meaningful relationships can occur between the bereaved and griefbots. This will be done by exploring the Western theory of continuing bonds proposed by Dennis Klass, Phyllis Silverman and Steven Nickman. In the second, I flesh out my Ubuntu framework according to Thaddeus Metz’s account of Ubuntu as a modal-relational theory. In the third section, I apply my Ubuntu framework to the case of Roman Mazurenko. In the fourth, I consider some counterarguments to the Ubuntu framework regarding privacy, commercialisation and people replacement. Finally, I conclude that, despite these limitations, the Ubuntu framework positively contributes to determining whether we should communicate with the dead through griefbots to assuage our grief.
From Part I - The Philosophy and Methodology of Experimentation in Sociology
Davide Barrera, Università degli Studi di Torino, Italy; Klarita Gërxhani, Vrije Universiteit Amsterdam; Bernhard Kittel, Universität Wien, Austria; Luis Miller, Institute of Public Goods and Policies, Spanish National Research Council; Tobias Wolbring, School of Business, Economics and Society at the Friedrich-Alexander-University Erlangen-Nürnberg
This chapter focuses on different research designs in experimental sociology. Most definitions of what constitutes an experiment converge on the idea that the experimenter "controls" the phenomenon under investigation, thereby setting the conditions under which the phenomenon is observed and analyzed. Typically, the researcher exerts experimental control by creating two situations that are virtually identical, except for one element that the researcher introduces or manipulates in only one of the situations. The purpose of this exercise is to observe the effects of the manipulation by comparing its outcomes with those of the situation in which the manipulation is absent. One way to look at how the implementation of this rather straightforward exercise produces a variety of designs is by focusing on the relationship that experimental design bears to the theory that inspires it. Therefore, we begin this chapter with a discussion of the relationship between theory and experimental design before turning to a description of the most important features of various types of designs. The chapter closes with a short overview of experiments in different settings, such as laboratory, field, and multifactorial survey experiments.
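As a toy illustration of this comparison logic (ours, not the chapter's; the `manipulate` and `outcome` callables are hypothetical placeholders), the following Python sketch randomly assigns participants to two otherwise identical conditions, applies the manipulation in only one of them, and estimates its effect as a difference in mean outcomes.

```python
import random
import statistics

def estimate_effect(participants, manipulate, outcome, seed=0):
    """Randomly assign participants to treatment or control, apply the
    manipulation only in the treatment condition, and compare outcomes."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)                                  # random assignment
    half = len(pool) // 2
    treated, control = pool[:half], pool[half:]
    y_treated = [outcome(manipulate(p)) for p in treated]
    y_control = [outcome(p) for p in control]          # identical, minus the manipulation
    return statistics.mean(y_treated) - statistics.mean(y_control)
```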
Chapter 6 opens with extensions of martingale theory in two directions: to σ-finite measures and to random variables with values in a Banach space. In §6.2 I prove Burkholder’s Inequality for martingales with values in a Hilbert space. The derivation that I give is essentially the same as Burkholder’s second proof, the one that gives optimal constants. Finally, the results in §6.1 are used in §6.3 to derive Birkhoff’s Individual Ergodic Theorem and a couple of its applications.
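For orientation (our paraphrase of the classical result, not a quotation from the chapter), Burkholder's Inequality for a Hilbert-space-valued martingale $(M_n)$ with $M_0 = 0$ says that for each $p \in (1,\infty)$ there are constants $0 < c_p \le C_p < \infty$, depending only on $p$, such that

$$c_p\,\mathbb{E}\big[S(M)_n^{\,p}\big] \;\le\; \mathbb{E}\big[\|M_n\|^{p}\big] \;\le\; C_p\,\mathbb{E}\big[S(M)_n^{\,p}\big], \qquad S(M)_n = \Big(\sum_{k=1}^{n} \|M_k - M_{k-1}\|^{2}\Big)^{1/2},$$

where $S(M)_n$ is the square function; the optimal constants referred to above are the best possible $c_p$ and $C_p$.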
We consider linear-fractional branching processes (one-type and two-type) with immigration in varying environments. For $n\ge0$, let $Z_n$ count the number of individuals of the $n$th generation, which excludes the immigrant who enters the system at time $n$. We call $n$ a regeneration time if $Z_n=0$. For both the one-type and two-type cases, we give criteria for the finiteness or infiniteness of the number of regeneration times. We then construct some concrete examples to exhibit the striking phenomena that varying environments can cause. For example, it may happen that the process is extinct but has only finitely many regeneration times. We also study the asymptotics of the number of regeneration times of the model in the example.
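As a hedged illustration (not the paper's construction), the following Python sketch simulates a one-type process with one immigrant per generation and geometric offspring laws, a special case of the linear-fractional family, with a generation-dependent parameter playing the role of the varying environment; it records the regeneration times $n$ with $Z_n = 0$.

```python
import random

def regeneration_times(n_gens, q, seed=0):
    """Simulate Z_0, ..., Z_{n_gens-1} for a one-type branching process
    with one immigrant per generation and geometric offspring laws;
    q(n) in (0, 1) is the generation-n offspring parameter, i.e. the
    varying environment. Returns the generations n with Z_n = 0."""
    rng = random.Random(seed)

    def offspring(p):               # P(X = k) = (1 - p) * p**k, k >= 0
        k = 0
        while rng.random() < p:
            k += 1
        return k

    z, regen = 0, []                # Z_0 = 0 under this convention
    for n in range(n_gens):
        if z == 0:
            regen.append(n)
        # Z_{n+1}: children of the Z_n residents plus children of the
        # immigrant entering at time n (the immigrant itself is not counted).
        z = sum(offspring(q(n)) for _ in range(z + 1))
    return regen

# Example: regeneration_times(1000, q=lambda n: 0.3)  # constant environment
```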
We prove that any increasing sequence of real numbers with average gap $1$ and Poisson pair correlations has some gap that is at least $3/2+10^{-9}$. This improves upon a result of Aistleitner, Blomer, and Radziwiłł.
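For context (a standard formulation, not quoted from the paper): an increasing sequence $(x_n)$ with average gap $1$ is said to have Poisson pair correlations if, for every $s > 0$,

$$\lim_{N\to\infty} \frac{1}{N}\,\#\big\{\,1 \le i \ne j \le N : |x_i - x_j| \le s \,\big\} = 2s,$$

i.e. the pair counts at every scale match those of a Poisson point process of unit intensity.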
Section 7.1 provides a brief introduction to the theory of martingales with a continuous parameter. As anyone at all familiar with the topic knows, anything approaching a full account of this theory requires much more space than a book like this can provide. Thus, I deal with only its most rudimentary aspects, which, fortunately, are sufficient for the applications to Brownian motion that I have in mind. Namely, in §7.2 I first discuss the intimate relationship between continuous martingales and Brownian motion (Lévy’s martingale characterization of Brownian motion), then derive the simplest (and perhaps most widely applied) case of the Doob–Meyer Decomposition Theorem, and finally show what Burkholder’s Inequality looks like for continuous martingales. In the concluding section, §7.3, the results in §7.1 and §7.2 are applied to derive the Reflection Principle for Brownian motion.
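For reference (our statement of the classical result, not necessarily the book's exact formulation), the Reflection Principle derived in §7.3 says that for a standard Brownian motion $(B_t)_{t\ge0}$ started at $0$ and any $a > 0$,

$$\mathbb{P}\Big(\sup_{0 \le s \le t} B_s \ge a\Big) = 2\,\mathbb{P}(B_t \ge a).$$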
Chapter 12 is the conclusion. It presents a discussion of how the components of performance evaluation for learning algorithms discussed throughout the book unify into an overall framework for in-laboratory evaluation. This is followed by a discussion of how to move from a laboratory setting to a deployment setting based on the material covered in the last part of the book. We then discuss the potential social consequences of machine learning technology deployment together with their causes, and advocate for the consideration of these consequences as part of the evaluation framework. We follow this discussion with a few concluding remarks.
From Part III - Methodological Challenges of Experimentation in Sociology
Davide Barrera, Università degli Studi di Torino, Italy; Klarita Gërxhani, Vrije Universiteit Amsterdam; Bernhard Kittel, Universität Wien, Austria; Luis Miller, Institute of Public Goods and Policies, Spanish National Research Council; Tobias Wolbring, School of Business, Economics and Society at the Friedrich-Alexander-University Erlangen-Nürnberg
Experimental practices developed in different scientific disciplines following different historical trajectories. Thus, standard experimental procedures differ starkly between disciplines. One of the most controversial issues is the use of deception as a methodological device. Psychologists do not conduct a study involving deception unless they have determined that the use of deceptive techniques is justified by the study’s significant prospective scientific, educational, or applied value and that effective nondeceptive alternative procedures are not feasible. In experimental economics, by contrast, deception is strictly forbidden, and a ban on experiments involving deception is enforced by all major economics journals. In the sociological scientific community, there is no clear consensus on the matter. Importantly, the disagreement is sometimes based on ethical considerations, but more often it rests on pragmatic grounds: the anti-deception camp argues that deceiving participants leads to invalid results, while the other side argues that deception has little negative impact and, under certain conditions, can even enhance validity. In this chapter, we first discuss the historical reasons leading to the emergence of such different norms in different fields and then analyze and separate ethical and pragmatic concerns. Finally, we propose some guidelines to regulate the use of deception in sociological experiments.
This chapter is devoted to the study of infinitely divisible laws. It begins in §3.1 with a few refinements (especially the Lévy Continuity Theorem) of the Fourier techniques introduced in §2.3. These play a role in §3.2, where the Lévy–Khinchine formula is first derived and then applied to the analysis of stable laws.
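For reference (the classical one-dimensional statement, not necessarily the book's exact formulation), the Lévy–Khinchine formula derived in §3.2 says that $\mu$ is infinitely divisible if and only if its characteristic function has the form

$$\hat{\mu}(\xi) = \exp\!\left( i b \xi - \frac{\sigma^{2} \xi^{2}}{2} + \int_{\mathbb{R}\setminus\{0\}} \big( e^{i \xi x} - 1 - i \xi x\,\mathbf{1}_{[-1,1]}(x) \big)\, \nu(dx) \right),$$

where $b \in \mathbb{R}$, $\sigma^{2} \ge 0$, and $\nu$ is a Lévy measure, i.e. $\int (1 \wedge x^{2})\, \nu(dx) < \infty$.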
This chapter provides the tools to compute catastrophe (CAT) risk, which represents a compound measure of the likelihood and magnitude of adverse consequences affecting structures, individuals, and valuable assets. The process consists of first establishing an inventory of assets (here real or simulated) exposed to potential hazards (exposure module). Estimating the expected damage resulting from a given hazard load (according to Chapter 2) is the second crucial step in the assessment process (vulnerability module). The application of damage functions to exposure data forms the basis for calculating loss estimates (loss module). To ensure consistency across perils, the mean damage ratio is used as the main measure for damage footprints $D(x,y)$, with the final loss footprints simply expressed as $L(x,y) = D(x,y) \times \nu(x,y)$, where $\nu(x,y)$ represents the exposure footprint. Damage functions are provided for various hazard loads: blasts (explosions and asteroid impacts), earthquakes, floods, hail, landslides, volcanic eruptions, and wind.
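A minimal Python sketch of the loss module's core computation follows; the function names and the example damage function are illustrative assumptions, not the chapter's, and in practice the damage function would come from the per-peril vulnerability module.

```python
import numpy as np

def loss_footprint(hazard, exposure, damage_fn):
    """Compute a loss footprint on a spatial grid.

    hazard    : 2-D array of hazard intensities at each cell (x, y)
    exposure  : 2-D array nu(x, y) of exposed value per cell
    damage_fn : vulnerability function mapping hazard intensity to a
                mean damage ratio D in [0, 1]
    Returns L(x, y) = D(x, y) * nu(x, y) and the total ground-up loss.
    """
    damage = np.clip(damage_fn(hazard), 0.0, 1.0)  # mean damage ratio D(x, y)
    loss = damage * exposure                       # L(x, y) = D(x, y) * nu(x, y)
    return loss, loss.sum()

# Hypothetical damage function for, e.g., peak wind gusts (m/s):
# zero below a threshold, rising smoothly toward 1 at extreme intensities.
wind_damage = lambda v: 1.0 - np.exp(-np.maximum(v - 25.0, 0.0) ** 2 / 900.0)
```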
The (re)insurance industry is maturing in its ability to measure and quantify Cyber Risk. The risk and threat landscapes around cyber continue to evolve, in some cases rapidly. The threat actor environment can change, as can the exposure base, depending on a variety of external political, economic, and technological factors. This rapidly changing environment poses interesting challenges for risk and capital actuaries across the market. Accurately reflecting all sources of material losses from cyber events is challenging for capital models and for the validation exercise. Furthermore, a robust enterprise risk management (ERM) framework supporting the business in evaluating Cyber Risk is an important consideration in giving the board comfort that Cyber Risk is being effectively understood and managed by the business. This paper discusses Cyber Risk in relation to important risk and capital model topics that actuaries should be considering. It is challenging for capital models to capture this rapidly changing risk in a proportionate way that can be communicated to stakeholders. As model vendors continue to mature and update their models, the validation of these models and the ultimate cyber capital allocation become even more complex. One’s view of risk could change rapidly from year to year, depending on the threat or exposure landscape, as demonstrated by the ransomware trends of recent years. This paper has been prepared primarily with General Insurers in mind; however, the broader aspects of capital modelling, dependencies, and the ERM framework are relevant to all disciplines of the profession.