This study provides an econometric methodology to test a linear structural relationship among economic variables. We propose the so-called distance-difference (DD) test and show that it has omnibus power against arbitrary nonlinear structural relationships. If the DD-test rejects the linear model hypothesis, a sequential testing procedure assisted by the DD-test can consistently estimate the degree of a polynomial function that arbitrarily approximates the nonlinear structural equation. Using extensive Monte Carlo simulations, we confirm the DD-test’s finite sample properties and compare its performance with that of sequential testing procedures assisted by the J-test and by moment selection criteria. Finally, we empirically illustrate the relationship between value-added and its production factors using firm-level data from the United States. We demonstrate that the production function exhibits factor-biased technological change rather than the Hicks-neutral technology presumed by the Cobb–Douglas production function.
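The sequential idea described above, testing successively higher polynomial degrees until an added term no longer improves the fit, can be sketched generically. The snippet below is a simplified stand-in using an ordinary F-test between nested polynomial regressions, not the paper's DD-test; the function name, thresholds, and simulated data are all illustrative.

```python
import numpy as np
from scipy import stats

def select_poly_degree(x, y, max_degree=6, alpha=0.05):
    """Sequentially test degrees d = 1, 2, ... and stop at the first
    degree whose extension to d + 1 is not a significant improvement.
    Generic F-test stand-in for a sequential specification test."""
    n = len(y)
    for d in range(1, max_degree):
        # residual sums of squares for the degree-d and degree-(d+1) fits
        rss_d = np.sum((y - np.polyval(np.polyfit(x, y, d), x)) ** 2)
        rss_d1 = np.sum((y - np.polyval(np.polyfit(x, y, d + 1), x)) ** 2)
        # F-statistic for the single added coefficient
        f_stat = (rss_d - rss_d1) / (rss_d1 / (n - d - 2))
        p_value = 1.0 - stats.f.cdf(f_stat, 1, n - d - 2)
        if p_value > alpha:   # degree d + 1 adds nothing: keep d
            return d
    return max_degree

# simulated data with a true quadratic structural relationship
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 500)
y = 1.0 + 0.5 * x - 0.8 * x ** 2 + rng.normal(0, 0.3, 500)
print(select_poly_degree(x, y))
```

With a quadratic data-generating process, the procedure rejects the linear fit and then stops once the cubic term fails to improve significantly.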
We have previously shown that the average measures of geographic routing’s greedy packet forwarding distance (PFD), through their dissimilarity values, characterize a mobile ad hoc network’s (MANET) topology by node size. In this article, we demonstrate a distribution-based analysis of the PFD measures generated by two representative greedy algorithms, GREEDY and ELLIPSOID. The results show the potential of distribution-based dissimilarity learning of the PFD for topology characterization. Characterizing dynamic MANET topology supports context-aware performance optimization in position-based or geographic packet routing.
It was recently proven that the correlation function of the stationary version of a reflected Lévy process is nonnegative, nonincreasing, and convex. In another branch of the literature it was established that the mean value of the reflected process starting from zero is nondecreasing and concave. In the present paper it is shown, by putting them in a common framework, that these results extend to substantially more general settings. Indeed, instead of reflected Lévy processes, we consider a class of more general stochastically monotone Markov processes. In this setup we show monotonicity results associated with a supermodular function of two coordinates of our Markov process, from which the above-mentioned monotonicity and convexity/concavity results directly follow, but now for the class of Markov processes considered rather than just reflected Lévy processes. In addition, various results for the transient case (when the Markov process is not in stationarity) are provided. The conditions imposed are natural, in that they are satisfied by various frequently used Markovian models, as illustrated by a series of examples.
Combining cross-sectional and time-series data is a long and well-established practice in empirical economics. We develop a central limit theory that explicitly accounts for possible dependence between the two datasets. We focus on common factors as the mechanism behind this dependence. Using our central limit theorem (CLT), we establish the asymptotic properties of parameter estimates of a general class of models based on a combination of cross-sectional and time-series data, recognizing the interdependence between the two data sources in the presence of aggregate shocks. Despite the complicated nature of the analysis required to formulate the joint CLT, it is straightforward to implement the resulting parameter limiting distributions due to a formal similarity of our approximations with Murphy and Topel’s (1985, Journal of Business and Economic Statistics 3, 370–379) formula.
Reducing negative attitudes toward older adults is an urgent issue. A previous study conducted “stereotype embodiment theory”-based interventions (SET interventions), which present participants with the contents of SET and related empirical findings. I focus on the subjective time to become older (the perception of how long people feel it will be before they become old) as a mechanism for the effect of SET interventions. I compare an SET intervention group with a control group in which participants are presented with an irrelevant vignette. Data from 641 participants (M = 31.97 years) were analyzed. The SET intervention shortened the subjective time to become older and reduced negative attitudes toward older adults. When designing SET interventions, it would be useful to focus not only on self-interested motives to avoid age discrimination but also on the subjective time to become older.
We present a study of the joint distribution of both the state of a level-dependent quasi-birth–death (QBD) process and its associated running maximum level, at a fixed time t: more specifically, we derive expressions for the Laplace transforms of transition functions that contain this information, and the expressions we derive contain familiar constructs from the classical theory of QBD processes. Indeed, one important takeaway from our results is that the distribution of the running maximum level of a level-dependent QBD process can be studied using results that are highly analogous to the more well-established theory of level-dependent QBD processes that focuses primarily on the joint distribution of the level and phase. We also explain how our methods naturally extend to the study of level-dependent Markov processes of M/G/1 type, if we keep track of the running minimum level instead of the running maximum level.
As relational event models are an increasingly popular tool for studying relational structures, the reliability of large-scale event data collection becomes more and more important. Automated and human-coded event data often suffer from non-negligible false-discovery rates in event identification, and most sensor data are based primarily on actors’ spatial proximity within predefined time windows, so an observed event may reflect either a social relationship or random co-location. Both examples imply spurious events that may bias estimates and inference. We propose the Relational Event Model for Spurious Events (REMSE), an extension of existing approaches for interaction data. The model provides a flexible solution for modeling data while controlling for spurious events. Estimation is carried out in an empirical Bayesian approach via data augmentation. We investigate the properties of the estimation procedure in a simulation study. To demonstrate its usefulness in two distinct applications, we apply the model to combat events from the Syrian civil war and to student co-location data. Results from the simulation and the applications identify the REMSE as a suitable approach to modeling relational event data in the presence of spurious events.
Aeroengine performance is determined by temperature and pressure profiles along various axial stations within an engine. Given limited sensor measurements, we require a statistically principled approach for inferring these profiles. In this paper we detail a Bayesian methodology for interpolating the spatial temperature or pressure profile at axial stations within an aeroengine. The profile at any given axial station is represented as a spatial Gaussian random field on an annulus, with circumferential variations modelled using a Fourier basis and radial variations modelled with a squared exponential kernel. This Gaussian random field is extended to ingest data from multiple axial measurement planes, with the aim of transferring information across the planes. To facilitate this type of transfer learning, a novel planar covariance kernel is proposed. In the scenario where the frequencies comprising the temperature field are unknown, we utilise a sparsity-promoting prior on the frequencies to encourage sparse representations. This easily extends to cases with multiple engine planes whilst accommodating frequency variations between the planes. The main quantity of interest, the spatial area average, is readily obtained in closed form. We term this the Bayesian area average and demonstrate how this metric offers far more representative averages than a sector area average, a widely used area averaging approach. Furthermore, the Bayesian area average naturally decomposes the posterior uncertainty into terms characterising insufficient sampling and sensor measurement error respectively. This too provides a significant improvement over previous uncertainty breakdowns based solely on standard deviations.
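The core construction, a Gaussian random field on an annulus with a Fourier basis circumferentially and a squared exponential kernel radially, can be sketched in a few lines. The snippet below is a minimal illustration of that kernel structure and of an area-weighted posterior mean standing in for the Bayesian area average; all hyperparameters, sensor positions, and the synthetic temperature field are assumptions for the example, not values from the paper.

```python
import numpy as np

def annulus_kernel(p1, p2, n_harmonics=3, ell=0.5, var=1.0):
    """Covariance between two (theta, r) locations on an annulus:
    a truncated Fourier-series kernel circumferentially times a
    squared exponential kernel radially (illustrative hyperparameters)."""
    th1, r1 = p1
    th2, r2 = p2
    k_theta = 1.0 + sum(np.cos(m * (th1 - th2)) for m in range(1, n_harmonics + 1))
    k_r = np.exp(-0.5 * (r1 - r2) ** 2 / ell ** 2)
    return var * k_theta * k_r

def gp_posterior_mean(X_train, y, X_test, noise=1e-2):
    """Standard GP regression posterior mean under the kernel above,
    with the data centred so the zero-mean prior is reasonable."""
    ybar = np.mean(y)
    K = np.array([[annulus_kernel(a, b) for b in X_train] for a in X_train])
    Ks = np.array([[annulus_kernel(a, b) for b in X_train] for a in X_test])
    alpha = np.linalg.solve(K + noise * np.eye(len(y)), y - ybar)
    return ybar + Ks @ alpha

# sparse "rake" measurements of a field with one circumferential harmonic
sensors = [(th, r) for th in np.linspace(0, 2 * np.pi, 8, endpoint=False)
           for r in (0.6, 1.0)]
temps = np.array([300.0 + 5.0 * np.cos(th) for th, _ in sensors])

# dense prediction grid; the area-weighted posterior mean plays the role
# of the Bayesian area average (annulus area element is proportional to r)
grid = [(th, r) for th in np.linspace(0, 2 * np.pi, 24, endpoint=False)
        for r in np.linspace(0.6, 1.0, 5)]
mu = gp_posterior_mean(sensors, temps, grid)
weights = np.array([r for _, r in grid])
area_avg = float(np.sum(weights * mu) / np.sum(weights))
print(area_avg)
```

Because the harmonic component integrates to zero around the circumference, the area average recovers the mean level of the field rather than being skewed by where the sensors happen to sit, which is the intuition behind preferring it to a sector average.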
We study the geometric and topological features of U-statistics of order k when the k-tuples satisfying geometric and topological constraints do not occur frequently. Using appropriate scaling, we establish the convergence of U-statistics in vague topology, while the structure of a non-degenerate limit measure is also revealed. Our general result yields various limit theorems for geometric and topological statistics, including persistent Betti numbers of Čech complexes, the volume of simplices, a functional of the Morse critical points, and values of the min-type distance function. The required vague convergence can be obtained as a consequence of the limit theorem for point processes induced by U-statistics. The latter convergence occurs, in particular, in the $\mathcal M_0$-topology.
Dense networks with weighted connections often exhibit a community-like structure, where although most nodes are connected to each other, different patterns of edge weights may emerge depending on each node’s community membership. We propose a new framework for generating and estimating dense weighted networks with potentially different connectivity patterns across different communities. The proposed model relies on a particular class of functions that map individual node characteristics to the edges connecting those nodes, allowing for flexibility while requiring a small number of parameters relative to the number of edges. By leveraging these estimation techniques, we also develop a bootstrap methodology for generating new networks on the same set of vertices, which may be useful in circumstances where multiple data sets cannot be collected. The performance of these methods is analyzed in theory, in simulations, and on real data.
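A toy generator in the spirit of this abstract maps each node's community label and a latent characteristic through a smooth community-dependent function to produce edge weights. The particular function, parameter values, and community structure below are illustrative choices, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

def generate_network(n, n_comm=2):
    """Toy dense weighted network: community labels and latent node
    characteristics are mapped to edge weights by a community-dependent
    function (illustrative, not the paper's specification)."""
    comm = rng.integers(0, n_comm, n)          # community memberships
    u = rng.uniform(size=n)                    # latent node characteristics
    # community-pair baselines: within-community edges are heavier
    base = np.where(comm[:, None] == comm[None, :], 2.0, 0.5)
    f = base * (u[:, None] + u[None, :])       # latent-driven weight surface
    w = f + rng.normal(0, 0.1, (n, n))         # noisy observed weights
    w = np.triu(w, 1)                          # drop self-loops and lower half
    w = w + w.T                                # symmetrize
    return comm, u, w

comm, u, w = generate_network(200)
within = w[comm[:, None] == comm[None, :]]
between = w[comm[:, None] != comm[None, :]]
print(within.mean(), between.mean())
```

A bootstrap in this setting would resample or re-draw the latent characteristics `u` and regenerate weights from the fitted function, producing new networks on the same vertex set.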
While we know that adolescents tend to befriend peers who share their race and gender, it is unclear whether patterns of homophily vary according to the strength, intimacy, or connectedness of these relationships. By applying valued exponential random graph models to a sample of 153 adolescent friendship networks, I test whether tendencies towards same-race and same-gender friendships differ for strong versus weak relational ties. In nondiverse, primarily white networks, weak ties are more likely to connect same-race peers, while racial homophily is not associated with the formation of stronger friendships. As racial diversity increases, however, strong ties become more likely to connect same-race peers, while weaker bonds are less apt to be defined by racial homophily. Gender homophily defines the patterns of all friendship ties, but these tendencies are more pronounced for weaker connections. My results highlight the empirical value of considering tie strength when examining social processes in adolescent networks.
In this paper we study a class of optimal stopping problems under g-expectation, that is, the cost function is described by the solution of backward stochastic differential equations (BSDEs). Primarily, we assume that the reward process is $L\exp\bigl(\mu\sqrt{2\log\!(1+L)}\bigr)$-integrable with $\mu>\mu_0$ for some critical value $\mu_0$. This integrability is weaker than $L^p$-integrability for any $p>1$, so it covers a comparatively wide class of optimal stopping problems. To reach our goal, we introduce a class of reflected backward stochastic differential equations (RBSDEs) with $L\exp\bigl(\mu\sqrt{2\log\!(1+L)}\bigr)$-integrable parameters. We prove the existence, uniqueness, and comparison theorem for these RBSDEs under Lipschitz-type assumptions on the coefficients. This allows us to characterize the value function of our optimal stopping problem as the unique solution of such RBSDEs.
While tetanus toxoid vaccination has reduced the incidence of tetanus in the developed world, this disease remains a substantial health problem in developing nations. Tetanus immune globulin (TIG) is used along with vaccination for the prevention of infection after major or contaminated wounds when vaccination status cannot be verified, or for active tetanus infection. This study describes the characterisation of a TIG produced by a caprylate/chromatography process. The TIG potency and the presence of plasma protein impurities were analysed at early and late steps in the manufacturing process by chromatography, immunoassay, coagulation and potency tests. The caprylate/chromatography process has previously been shown to effectively eliminate or inactivate potentially transmissible agents from plasma-derived products. In this study, the caprylate/chromatography process was shown to effectively concentrate TIG activity and efficiently remove pro-coagulant factors naturally present in plasma. This TIG drug product builds on the long-term evidence of the safety and efficacy of TIG by providing a product with higher purity and low pro-coagulant protein impurities.
We consider a sequence of Poisson cluster point processes on $\mathbb{R}^d$: at step $n\in\mathbb{N}_0$ of the construction, the cluster centers have intensity $c/(n+1)$ for some $c>0$, and each cluster consists of the particles of a branching random walk up to generation n—generated by a point process with mean 1. We show that this ‘critical cluster cascade’ converges weakly, and that either the limit point process equals the void process (extinction), or it has the same intensity c as the critical cluster cascade (persistence). We obtain persistence if and only if the Palm version of the outgrown critical branching random walk is locally almost surely finite. This result allows us to give numerous examples for persistent critical cluster cascades.
Sexual propagation of Agave plants is an incipient cultivation method; these plants withstand drought and adverse growing conditions, so research on Agave’s diversity, seed processing, and storage could support its cultivation on marginal lands. The aim of this work was to evaluate the seed morphology, germination, and seedling genetic diversity of six seed origins (species × provenance) of Agave plants collected from five provenances in Mexico. Seed longevity was evaluated in two seed origins after a 10-year storage period. Seed morphology and seedling genetic variation results demonstrated intraspecific variation within Agave salmiana and interspecific variation with the other seed origins. After the 10-year storage period, seed germination of the two A. salmiana seed origins remained relatively stable; the storage conditions and seed variables reported here can serve as reference parameters for future analyses. To the best of the authors’ knowledge, this is the first report of an evaluation of Agave seed longevity after a 10-year storage period.
Researchers have found that although external attacks, exogenous shocks, and node knockouts can disrupt networked systems, they rarely lead to the system’s collapse. Although these processes are widely studied, most studies of how exogenous shocks affect networks rely on simulated or observational data. Thus, little is known about how groups of real individuals respond to external attacks. In this article, we employ an experimental design in which exogenous shocks, in the form of the unexpected removal of a teammate, are imposed on small teams of people who know each other. This allows us to causally identify the removed individual’s contribution to the team structure, the effect that an individual had on those to whom they were connected, and the effect of the node knockout on the team. At the team level, we find that node knockouts decrease overall internal team communication. At the individual level, we find that node knockouts cause the remaining influential players to become more influential, while the remaining peripheral players become more isolated within their team. We also find that node knockouts may have a nominal influence on team performance. These findings shed light on how teams respond and adapt to node knockouts.
The bootComb R package allows researchers to derive confidence intervals with correct target coverage for arbitrary combinations of arbitrary numbers of independently estimated parameters. Previous versions (<1.1.0) of bootComb used independent bootstrap sampling and required that the parameters themselves be independent—an unrealistic assumption in some real-world applications.
Findings
The bootComb package has been extended to allow for dependent parameters, using Gaussian copulas to define the dependence between them.
Implications
The updated bootComb package can now handle cases of dependent parameters, with users specifying a correlation matrix defining the dependence structure. While in practice it may be difficult to know the exact dependence structure between parameters, bootComb allows running sensitivity analyses to assess the impact of parameter dependence on the resulting confidence interval for the combined parameter.
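The Gaussian-copula mechanism described above can be sketched independently of the package's R interface. The snippet below is a Python illustration of the idea, not bootComb's API: correlated uniforms are drawn via a Gaussian copula, pushed through each parameter's marginal quantile function, and combined; the marginals, the product combination, and the correlation value are all assumed for the example.

```python
import numpy as np
from scipy import stats

def copula_combine(n=100_000, rho=0.6, seed=3):
    """Percentile interval for a combination of two dependent parameters,
    with dependence induced by a Gaussian copula (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    uniforms = stats.norm.cdf(z)                 # correlated U(0,1) pairs
    p1 = stats.beta(40, 60).ppf(uniforms[:, 0])  # e.g. a prevalence near 0.4
    p2 = stats.beta(70, 30).ppf(uniforms[:, 1])  # e.g. a sensitivity near 0.7
    combined = p1 * p2                           # assumed combination function
    return np.percentile(combined, [2.5, 97.5])

lo, hi = copula_combine()
print(lo, hi)
```

Re-running this with different `rho` values is exactly the kind of sensitivity analysis the abstract recommends when the true dependence structure is unknown: the width of the resulting interval shows how much the combined estimate depends on the assumed correlation.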
This paper provides a full classification of the dynamics for continuous-time Markov chains (CTMCs) on the nonnegative integers with polynomial transition rate functions and without arbitrarily large backward jumps. Such stochastic processes are abundant in applications, in particular in biology. More precisely, for CTMCs of bounded jumps, we provide necessary and sufficient conditions in terms of calculable parameters for explosivity, recurrence versus transience, positive recurrence versus null recurrence, certain absorption, and implosivity. Simple sufficient conditions for exponential ergodicity of stationary distributions and quasi-stationary distributions as well as existence and nonexistence of moments of hitting times are also obtained. Similar simple sufficient conditions for the aforementioned dynamics together with their opposite dynamics are established for CTMCs with unbounded forward jumps. Finally, we apply our results to stochastic reaction networks, an extended class of branching processes, a general bursty single-cell stochastic gene expression model, and population processes, none of which are birth–death processes. The approach is based on a mixture of Lyapunov–Foster-type results, the classical semimartingale approach, and estimates of stationary measures.