Let $\mathcal{F}$ be an intersecting family. A $(k-1)$-set $E$ is called a unique shadow if it is contained in exactly one member of $\mathcal{F}$. Let ${\mathcal{A}}=\{A\in \binom{[n]}{k}\colon |A\cap \{1,2,3\}|\geq 2\}$. In the present paper, we show that for $n\geq 28k$, $\mathcal{A}$ is the unique family attaining the maximum size among all intersecting families without unique shadow. Several other results of a similar flavour are established as well.
Plasmodium vivax is the most frequent and widely distributed cause of recurring malaria. It is a public health issue that occurs mostly in Southeast Asia, followed by the Middle East, Latin and South America, and sub-Saharan Africa. Although it is commonly known as an etiologic agent of malaria with mild clinical manifestations, it can lead to severe complications. It has long been neglected and understudied owing to its low mortality, the infeasibility of culturing it, and its mild clinical manifestations in comparison to P. falciparum. Despite the mild clinical course commonly reported for P. vivax, the correlation between the clinical manifestations exhibited by patients with severe and non-severe complications and the genetic diversity of the parasites responsible for the disease is not clear. An investigation was carried out between 2011 and 2021 on patients referred to Avicenne Hospital for suspected P. vivax infection. Upon arrival, they underwent clinical and biological examinations. A lateral flow test and LAMP-PCR confirmed the presence of malaria parasites, Plasmodium sp. Microscopic examination revealed Plasmodium parasites with a parasitaemia between 0.01% and 0.38%. Conventional PCR amplification targeting a 714 bp fragment of the small subunit ribosomal DNA (SSU-rDNA), followed by bidirectional sequencing, allowed us to identify the parasites as P. vivax. The neighbor-joining (NJ) phylogenetic tree revealed that the P. vivax sequences processed in the present study clustered in two well-differentiated and well-supported clades: a larger clade comprising the P. vivax specimens from all our patients together with homonymous sequences from Indonesia, India, and El Salvador, and a second clade encompassing sequences from Yemen and India. In addition, the clustering displayed by the median-joining network agreed well with the topology of the NJ phylogenetic tree.
No correlation was observed between the clinical manifestations of patients with severe and non-severe complications, who had diverse geographical origins, and the genetic diversity of the parasites, since all sequences demonstrated high homogeneity. These findings can help advance knowledge of the population genetics of P. vivax and inform proper control and management strategies against these parasites.
Candidates arrive sequentially for an interview process which results in them being ranked relative to their predecessors. Based on the ranks available at each time, a decision mechanism must be developed that selects or dismisses the current candidate in an effort to maximize the chance of selecting the best candidate. This classical version of the ‘secretary problem’ has been studied in depth, mostly using combinatorial approaches, along with numerous other variants. We consider a particular new version where, during reviewing, it is possible to query an external expert to improve the probability of making the correct decision. Unlike existing formulations, we consider experts that are not necessarily infallible and may provide faulty suggestions. To solve our problem we adopt a probabilistic methodology, viewing the querying times as consecutive stopping times which we optimize with the help of optimal stopping theory. For each querying time we must also design a mechanism to decide whether or not the search should terminate at that time. This decision is straightforward under the usual assumption of infallible experts, but when experts are faulty it has a far more intricate structure.
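As a concrete baseline, the classical cutoff rule for the expert-free problem (observe roughly n/e candidates, then accept the first record-breaker) can be checked by simulation. This is only an illustrative sketch of the classical setting, not the expert-query variant; the function name and parameters are our own.

```python
import math
import random

def classic_secretary(n, trials=100_000, seed=0):
    """Estimate the success probability of the classical 1/e rule:
    reject the first round(n/e) candidates, then accept the first one
    who beats everybody seen so far (or the last candidate if none does)."""
    rng = random.Random(seed)
    cutoff = round(n / math.e)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))  # rank 0 is the best candidate
        rng.shuffle(ranks)
        best_seen = min(ranks[:cutoff]) if cutoff else n
        chosen = next((r for r in ranks[cutoff:] if r < best_seen), ranks[-1])
        wins += (chosen == 0)
    return wins / trials
```

For moderate n the estimate hovers near the well-known limit 1/e ≈ 0.368, which is the benchmark any expert-assisted scheme should beat.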
We consider the problem of optimally maintaining an offshore wind farm in which major components progressively degrade over time due to normal usage and exposure to a randomly varying environment. The turbines exhibit both economic and stochastic dependence due to shared maintenance setup costs and their common environment. Our aim is to identify optimal replacement policies that minimize the expected total discounted setup, replacement, and lost power production costs over an infinite horizon. The problem is formulated using a Markov decision process (MDP) model from which we establish monotonicity of the cost function jointly in the degradation level and environment state and characterize the structure of the optimal replacement policy. For the special case of a two-turbine farm, we prove that the replacement threshold of one turbine depends not only on its own state of degradation but also on the state of degradation of the other turbine in the farm. This result yields a complete characterization of the replacement policy of both turbines by a monotone curve. The policies characterized herein can be used to optimally prescribe timely replacements of major components and suggest when it is most beneficial to share costly maintenance resources.
This paper analyzes the training process of generative adversarial networks (GANs) via stochastic differential equations (SDEs). It first establishes SDE approximations for the training of GANs under stochastic gradient algorithms, with precise error bound analysis. It then describes the long-run behavior of GAN training via the invariant measures of its SDE approximations under proper conditions. This work builds a theoretical foundation for GAN training and provides analytical tools to study its evolution and stability.
While the previous chapter covered probability on events, in this chapter we will switch to talking about random variables and their corresponding distributions. We will cover the most common discrete distributions, define the notion of a joint distribution, and finish with some practical examples of how to reason about the probability that one device will fail before another.
The general setting in statistics is that we observe some data and then try to infer some property of the underlying distribution behind this data. The underlying distribution behind the data is unknown and is represented by a random variable (r.v.). This chapter will briefly introduce the general concept of estimators, focusing on estimators for the mean and variance.
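The standard estimators for the mean and variance are the sample mean and the Bessel-corrected sample variance. A minimal sketch (function and variable names are illustrative):

```python
def sample_mean_var(xs):
    """Unbiased estimators from a sample: the sample mean, and the
    sample variance with the n-1 (Bessel) correction in the denominator."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return mean, var
```

Dividing by n - 1 rather than n is what makes the variance estimator unbiased.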
This chapter deals with one of the most important aspects of systems modeling, namely the arrival process. When we say “arrival process” we are referring to the sequence of arrivals into the system. The most widely used arrival process model is the Poisson process. This chapter defines the Poisson process and highlights its properties. Before we dive into the Poisson process, it will be helpful to review the Exponential distribution, which is closely related to the Poisson process.
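The connection to the Exponential distribution runs through the interarrival times: summing i.i.d. Exponential(rate) gaps yields the arrival times of a Poisson process of that rate. A minimal sketch, with illustrative names:

```python
import random

def poisson_arrivals(rate, horizon, seed=0):
    """Generate the arrival times of a Poisson process with the given rate
    on [0, horizon] by accumulating i.i.d. Exponential(rate) interarrival
    times (each gap is memoryless)."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)  # Exponential gap with mean 1/rate
        if t > horizon:
            return times
        times.append(t)
```

The expected number of arrivals in [0, horizon] is rate × horizon, matching the Poisson-distributed count the chapter derives.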
This chapter begins our study of Markov chains, specifically discrete-time Markov chains. In this chapter and the next, we limit our discussion to Markov chains with a finite number of states. Our focus in this chapter will be on understanding how to obtain the limiting distribution for a Markov chain.
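For a finite chain, the limiting distribution can be approximated by repeatedly multiplying a starting distribution by the transition matrix. A minimal sketch; the two-state example matrix is our own, not taken from the chapter:

```python
def limiting_distribution(P, iters=200):
    """Approximate the limiting (stationary) distribution pi of a finite
    discrete-time Markov chain with transition matrix P by iterating
    pi <- pi P from the uniform starting distribution."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# A two-state chain: state 0 stays put w.p. 0.9; state 1 stays w.p. 0.5.
P = [[0.9, 0.1],
     [0.5, 0.5]]
```

Solving pi = pi P by hand for this matrix gives pi = (5/6, 1/6), which the iteration converges to.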
In the last two chapters we studied many tail bounds, including those of Markov, Chebyshev, Chernoff, and Hoeffding. We also studied a tail approximation based on the Central Limit Theorem (CLT). In this chapter we will apply these bounds and approximations to an important problem in computer science: the design of hashing algorithms. In fact, hashing is closely related to the balls-and-bins problem that we recently studied in Chapter 19.
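The balls-and-bins view of hashing can be made concrete by throwing n keys into n buckets with an idealized uniform hash and measuring the fullest bucket. A small illustrative simulation (names are our own):

```python
import random

def max_bucket_load(n, seed=0):
    """Throw n balls into n bins uniformly at random (an idealized hash
    function) and return the load of the fullest bin -- the worst-case
    chain length in a chained hash table with n keys and n buckets."""
    rng = random.Random(seed)
    loads = [0] * n
    for _ in range(n):
        loads[rng.randrange(n)] += 1
    return max(loads)
```

The max load grows like ln n / ln ln n with high probability, which is exactly the kind of statement the tail bounds above let us prove.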
This part of the book is devoted to randomized algorithms. A randomized algorithm is simply an algorithm that uses a source of random bits, allowing it to make random moves. Randomized algorithms are extremely popular in computer science because (1) they are highly efficient (have low runtimes) on every input, and (2) they are often quite simple.
In the previous chapter, we studied individual continuous random variables. We now move on to discussing multiple random variables, which may or may not be independent of each other. Just as in Chapter 3 we used a joint probability mass function (p.m.f.), we now introduce the continuous counterpart, the joint probability density function (joint p.d.f.). We will use the joint p.d.f. to answer questions about the expected value of one random variable, given some information about the other random variable.
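As an example of the kind of question a joint p.d.f. answers, consider two independent Exponential lifetimes and the probability that the first expires before the second. The sketch below is our own illustration, not the chapter's: it integrates the joint p.d.f. over the region x < y, using the closed-form Exponential tail for the inner integral and a midpoint-rule sum for the outer one.

```python
import math

def p_x_fails_first(l1, l2, grid=2000, cap=10.0):
    """P(X < Y) for independent X ~ Exp(l1), Y ~ Exp(l2), by integrating
    the joint p.d.f. f(x, y) = l1*exp(-l1*x) * l2*exp(-l2*y) over the
    region {y > x}.  The inner integral over y in (x, inf) equals
    exp(-l2*x), leaving a 1-D midpoint-rule sum over x in [0, cap]."""
    h = cap / grid
    return sum(l1 * math.exp(-l1 * x) * math.exp(-l2 * x) * h
               for x in ((i + 0.5) * h for i in range(grid)))
```

The numerical answer matches the closed form P(X < Y) = l1 / (l1 + l2), up to the small truncation and discretization error.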
This final part of the book is devoted to the topic of Markov chains. Markov chains are an extremely powerful tool used to model problems in computer science, statistics, physics, biology, and business – you name it! They are used extensively in AI/machine learning, computer science theory, and in all areas of computer system modeling (analysis of networking protocols, memory management protocols, server performance, capacity provisioning, disk protocols, etc.). Markov chains are also very common in operations research, including supply chain, call center, and inventory management.
We have studied several common continuous distributions: the Uniform, the Exponential, and the Normal. However, if we turn to computer science quantities, such as file sizes, job CPU requirements, IP flow times, and so on, we find that none of these are well represented by the continuous distributions that we’ve studied so far. To understand the type of distributions that come up in computer science, it’s useful to start with a story.
This chapter introduces randomized algorithms. We start with a discussion of the differences between randomized algorithms and deterministic algorithms. We then introduce the two primary types of randomized algorithms: Las Vegas algorithms and Monte Carlo algorithms. This chapter and its exercises will contain many examples of randomized algorithms, all of the Las Vegas variety. In Chapter 22 we will turn to examples of the Monte Carlo variety.
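A tiny example of the Las Vegas flavor: repeatedly probe random positions of a bit array until a 1 is found. The answer returned is always correct; only the running time is random. (This toy example is ours, not necessarily one from the chapter.)

```python
import random

def find_a_one(bits, seed=0):
    """Las Vegas search: probe uniformly random positions until hitting a 1.
    Always returns a correct index; the number of probes is random, with
    expectation len(bits) / (number of ones)."""
    rng = random.Random(seed)
    while True:
        i = rng.randrange(len(bits))
        if bits[i] == 1:
            return i
```

If half the array holds ones, the expected number of probes is just 2, even though the worst case is unbounded.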