The study aimed to present the in vitro susceptibilities of clinical isolates from Gram-negative bacterial bloodstream infections (GNBSI) collected in China. GNBSI isolates were collected from 18 tertiary hospitals in 7 regions of China from 2018 to 2020. Minimum inhibitory concentrations were assessed using a Trek Diagnostic System. Susceptibility was determined using CLSI broth microdilution, and breakpoints were interpreted using CLSI M100 (2021). A total of 1,815 GNBSI strains were collected, with Escherichia coli (42.4%) and Klebsiella pneumoniae (28.6%) being the most prevalent species, followed by Pseudomonas aeruginosa (6.7%). Susceptibility analyses revealed low susceptibilities (<40%) of ESBL-producing E. coli and K. pneumoniae to third-/fourth-generation cephalosporins, monobactams, and fluoroquinolones. High susceptibilities to colistin (95.0%) and amikacin (81.3%) were found for K. pneumoniae, while Acinetobacter baumannii exhibited high susceptibility (99.2%) to colistin but low susceptibility to other antimicrobials (<27.5%). Isolates from ICUs displayed lower drug susceptibility rates of K. pneumoniae and A. baumannii than isolates from non-ICUs (all P < 0.05). The detection of carbapenem-resistant and ESBL-producing K. pneumoniae differed across regions (both P < 0.05). E. coli and K. pneumoniae were major contributors to GNBSI, while A. baumannii isolates obtained from ICU departments exhibited severe drug resistance.
A tantalizing open problem, posed independently by Stiebitz in 1995 and by Alon in 1996 and again in 2006, asks whether for every pair of integers $s,t \ge 1$ there exists a finite number $F(s,t)$ such that the vertex set of every digraph of minimum out-degree at least $F(s,t)$ can be partitioned into non-empty parts $A$ and $B$ such that the subdigraphs induced on $A$ and $B$ have minimum out-degree at least $s$ and $t$, respectively.
In this short note, we prove that if $F(2,2)$ exists, then all the numbers $F(s,t)$ with $s,t\ge 1$ exist and satisfy $F(s,t)=\Theta (s+t)$. In consequence, the problem of Alon and Stiebitz reduces to the case $s=t=2$. Moreover, the numbers $F(s,t)$ with $s,t \ge 2$ either all exist and grow linearly, or all of them do not exist.
We use recent advances in polynomial diffusion processes to develop a continuous-time joint mortality model for the actuarial valuation and risk analysis of life insurance liabilities. The model captures the stochastic nature of future mortality improvements and introduces a common subordinator for the marginal survival processes, resulting in a nontrivial dependence structure between the survival of pairs of individuals. Polynomial diffusion processes can be used to derive closed-form formulae for standard actuarial quantities. The model provides a good fit to a classic dataset provided by a Canadian insurer and can be used to evaluate products issued to multiple lives, as shown through numerical applications.
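The "closed-form formulae" rest on the moment formula for polynomial processes; in the standard notation of that literature (not necessarily the paper's own), if $G$ is the matrix representation of the generator acting on a basis $H(x) = (h_1(x), \ldots, h_N(x))^\top$ of polynomials up to a fixed degree, then conditional moments satisfy

```latex
\mathbb{E}\left[\, p(X_T) \mid \mathcal{F}_t \,\right]
  = H(X_t)^\top \, e^{(T-t)\,G} \, \vec{p},
```

where $\vec{p}$ is the coordinate vector of the polynomial $p$ in the basis $H$; this is how survival probabilities and related actuarial quantities can reduce to matrix-exponential computations.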
This study proposes two novel time-varying model-averaging methods for time-varying parameter regression models. When the number of predictors is small, we propose a novel time-varying complete subset-averaging (TVCSA) procedure, in which the optimal time-varying subset size is obtained by minimizing the local leave-h-out cross-validation criterion. The TVCSA method is asymptotically optimal in achieving the lowest possible local mean squared error. When the number of predictors is relatively large, we propose a factor TVCSA method that reduces the computational burden by first extracting a few factors from the predictors using principal component analysis and then obtaining the TVCSA forecasts from time-varying models with the generated factors. We show that the TVCSA estimator remains asymptotically optimal in the presence of generated factors. Monte Carlo simulation studies provide favorable evidence for the TVCSA methods relative to popular model-averaging methods in the literature. Empirical applications to equity premium and inflation forecasting highlight the practical merits of the proposed methods.
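As a rough illustration of the complete-subset-averaging idea behind TVCSA, the sketch below is a static, equal-weight version with subset size one on made-up data; the paper's method additionally allows time-varying coefficients and chooses the subset size by leave-h-out cross-validation.

```python
# Static complete subset averaging (CSA) sketch: average the forecasts
# of all single-predictor OLS regressions (subset size k = 1).
# Data and variable names below are hypothetical.
def simple_ols(x, y):
    """Closed-form OLS for y = a + b*x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

def csa_forecast(X, y, x_new):
    """X: list of predictor series; x_new: latest value of each predictor."""
    forecasts = []
    for series, xn in zip(X, x_new):
        a, b = simple_ols(series, y)
        forecasts.append(a + b * xn)
    return sum(forecasts) / len(forecasts)  # equal-weight average

# toy example: y is driven mainly by the first predictor
x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [2.0, 1.0, 4.0, 3.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
print(round(csa_forecast([x1, x2], y, [6.0, 6.0]), 3))
```

Averaging over all subsets of a given size, rather than picking one "best" model, is what stabilizes the forecasts; the time-varying version re-estimates the coefficients locally in time.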
Invasive group A streptococcal (iGAS) outbreaks have been linked to Community Healthcare Services Delivered at Home (CHSDH). There is, however, very limited evidence describing the epidemiology and mortality of iGAS cases associated with CHSDH. We used routine data to describe iGAS cases in adults who had received CHSDH prior to onset, and to compare characteristics between CHSDH-outbreak and non-outbreak CHSDH cases, in South East England between December 2021 and December 2023. There were 80/898 (8.9%) iGAS case episodes with CHSDH prior to onset; cases were in elderly people (50% aged 85 and over) and had primarily received wound or ulcer care (93.8%), with almost all care delivered by community nurses (98.8%). The 30-day all-cause case fatality was 26.3%. emm1.0 was the most common type (17.5%). In this period, 5/11 iGAS outbreaks (45.4%) were CHSDH-associated, and 25 cases with receipt of CHSDH prior to onset (31.3%, confidence interval [CI] 21.3–42.6%) were linked to these outbreaks. On univariate analysis, CHSDH-outbreak case episodes were more likely than non-outbreak CHSDH cases to be associated with emm pattern genotype E (OR 6.1, 95% CI 1.8–20.9) and with a skin or soft tissue infection clinical presentation (OR 3.6, 95% CI 1.1–12.0). There may be an increased risk of propagation of iGAS outbreaks in patients receiving CHSDH, emphasizing the need for rigorous early infection prevention and control, and for outbreak surveillance.
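Odds ratios such as those above come from 2x2 tables; as a generic recipe (the cell counts below are hypothetical, not the study's data), an OR with a Wald 95% CI can be computed as:

```python
import math

# Odds ratio and Wald 95% CI from a 2x2 table.
# a, b = exposed cases/non-cases; c, d = unexposed cases/non-cases.
# Counts are made up for illustration only.
def odds_ratio_ci(a, b, c, d):
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(15, 10, 10, 45)
```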
Focused on empirical methods and their applications to corporate finance, this innovative text equips students with the knowledge to analyse and critically evaluate quantitative research methods in corporate finance, and conduct computer-aided statistical analyses on various types of datasets. Chapters demonstrate the application of basic econometric models in corporate finance (as opposed to derivations or theorems), backed up by relevant research. Alongside practical examples and mini case studies, computer lab exercises enable students to apply the theories of corporate finance and make stronger connections between theory and practice, while developing their programming skills. All of the Stata code is provided (with corresponding Python and R code available online), so students of all programming abilities can focus on understanding and interpreting the analyses.
Bringing together years of research into one useful resource, this text empowers the reader to creatively construct their own dependence models. Intended for senior undergraduate and postgraduate students, it takes a step-by-step look at the construction of specific dependence models, including exchangeable, Markov, moving-average and, in general, spatio-temporal models. All constructions maintain the desired property of pre-specifying the marginal distribution and keeping it invariant. They do not separate the dependence from the marginals, and the mechanisms used to induce dependence are general enough to apply to a very large class of parametric distributions. All the constructions are based on appropriate definitions of three building blocks in a Bayesian analysis context: prior distribution, likelihood function, and posterior distribution. All results are illustrated with examples and graphical representations. Applications with data and code are interspersed throughout the book, covering fields including insurance and epidemiology.
This chapter aims to prepare the reader for the models, applications, lab work, and mini case studies in the coming chapters. The focus is on sample selection, identification strategy, and hypothesis development. The chapter first covers some terminology and then discusses data types, units of analysis, data management, and different sampling methods. The sample-selection part explores the steps in a well-structured sample design. The identification-strategy part covers the causal relationship of interest, ideal experiments, and statistical inference. This part is of particular significance because, in corporate finance research, it is important that the hypothesis is closely tied to economic theory and the previous literature. Only then can we draw meaningful conclusions from the studied relationships, with deductions following from hypotheses. The chapter ends with a hypothesis-development section that details some decision/rejection rules. Stata code is provided for the examples.
A time series contains the values of a dataset sampled at different points in time. Some examples in financial research include asset prices, volatility indices, inflation rates, and revenues. This chapter briefly covers the basic methods used in time-series analysis. Issues include whether the time-series data have equally spaced intervals, whether there is noise or error, how quickly the series grows, and whether the series has missing values. The chapter begins with tests for autocorrelation and remedies for it. It then presents some standard tests for stationarity and cointegration, briefly covering random walks and the unit-root test. The models covered include, among others, autoregressive distributed lag (ARDL), autoregressive moving average (ARMA), autoregressive integrated moving average (ARIMA), generalized autoregressive conditional heteroskedasticity (GARCH), and vector autoregressive (VAR) models. The chapter provides an application to mortgage rates and ends with lab work and a mini case study.
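As a minimal illustration of the chapter's building blocks (not the chapter's Stata examples; the series below is made up), the lag-1 sample autocorrelation and a one-step AR(1) forecast can be computed as:

```python
# Lag-1 sample autocorrelation and a one-step AR(1) forecast,
# the simplest building block of the ARMA/ARIMA models discussed
# in the chapter. The series is hypothetical.
def autocorr_lag1(x):
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t - 1] - m) for t in range(1, n))
    den = sum((xi - m) ** 2 for xi in x)
    return num / den

def ar1_forecast(x):
    """One-step-ahead forecast: m + rho * (x_T - m)."""
    rho = autocorr_lag1(x)
    m = sum(x) / len(x)
    return m + rho * (x[-1] - m)

rates = [3.1, 3.3, 3.6, 3.5, 3.8, 4.0, 4.1, 3.9]  # e.g. monthly mortgage rates
print(round(autocorr_lag1(rates), 3), round(ar1_forecast(rates), 3))
```

A positive lag-1 autocorrelation, as here, is what the chapter's autocorrelation tests detect and what AR-type models exploit for forecasting.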
In regressions where the dependent variable takes limited values, such as 0 and 1, or takes category values, OLS estimation will likely provide biased and inconsistent results. Because the dependent variable is either discontinuous or bounded in range, one of the assumptions of the CLRM is violated (that the error term is normally distributed conditional on the independent variables). This chapter focuses on limited dependent-variable models, covering, for example, firm decision-making, capital structure decisions, and investor decision-making. The chapter presents and discusses the linear probability model, maximum-likelihood estimator, probit model, logit model, ordered probit and logit models, multinomial logit model, conditional logit model, tobit model, Heckman selection model, and count data models. It covers the assumptions behind and applications of these models. As usual, the chapter provides an application of selected limited dependent-variable models, lab work, and a mini case study.
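A hedged sketch of one of these estimators: a simple logit model fitted by maximizing the log-likelihood with plain gradient ascent on hypothetical data (the book's own examples use Stata, and its applications differ from this toy):

```python
import math

# Logit: P(y=1|x) = 1/(1+exp(-(b0+b1*x))), fitted by gradient ascent
# on the log-likelihood. Data and variable meanings are made up,
# e.g. x = leverage, y = 1 if the firm issues equity.
def fit_logit(x, y, steps=5000, lr=0.1):
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += (yi - p)        # score w.r.t. intercept
            g1 += (yi - p) * xi   # score w.r.t. slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

x = [0.1, 0.3, 0.5, 0.9, 1.2, 1.6, 2.0, 2.4]
y = [0, 0, 0, 0, 1, 1, 1, 1]
b0, b1 = fit_logit(x, y)
p_low = 1 / (1 + math.exp(-(b0 + b1 * 0.2)))
p_high = 1 / (1 + math.exp(-(b0 + b1 * 2.2)))
```

Unlike the linear probability model, the fitted probabilities stay inside (0, 1) by construction, which is the chapter's core motivation for these models.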
Corporate finance research requires close consideration of the assumptions underlying the econometric models applied to test hypotheses. This is partly because, as the field has evolved, more complex relationships have been examined, some of which pose problems of endogeneity. Endogeneity is one problem that violates the assumptions of the CLRM. It is so central that this book devotes a whole chapter to discussing it. The chapter covers the sources of endogeneity bias and the most commonly used methods that can be applied to cross-sectional data to deal with the endogeneity problem in corporate finance. These methods cover two-stage least squares (so-called IV approach), treatment effects, matching techniques, and regression discontinuity design (RDD). An application is provided for an IV approach and an RDD approach. The chapter ends with an application of the most common methods to real data, lab work, and a mini case study.
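For the just-identified case with a single instrument z, the IV (2SLS) slope has the closed form cov(z, y) / cov(z, x); a minimal sketch with hypothetical numbers (not the chapter's application):

```python
# Just-identified IV estimator with one instrument z for an
# endogenous regressor x: beta_IV = cov(z, y) / cov(z, x).
# All data below are made up for illustration.
def cov(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n

def iv_slope(z, x, y):
    return cov(z, y) / cov(z, x)

z = [0, 0, 1, 1, 0, 1, 1, 0]                   # instrument
x = [1.0, 1.2, 2.1, 2.4, 0.9, 2.2, 2.0, 1.1]   # endogenous regressor
y = [2.0, 2.5, 4.3, 4.9, 1.9, 4.4, 4.1, 2.2]   # outcome
beta = iv_slope(z, x, y)
```

Intuitively, the instrument isolates the variation in x that is unrelated to the error term; a valid z must be relevant (correlated with x) and excludable (affecting y only through x), assumptions the chapter discusses at length.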
Data management concerns collecting, processing, analyzing, organizing, storing, and maintaining the data you collect for a research design. The focus in this chapter is on learning how to use Stata and apply data-management techniques to a provided dataset. No previous knowledge is required for the applications. The chapter goes through the basic operations for data management, including missing-value analysis and outlier analysis. It then covers descriptive statistics (univariate analysis) and bivariate analysis. Finally, it ends by discussing how to merge and append datasets. This chapter is important preparation for the applications, lab work, and mini case studies in the following chapters, since it familiarizes the reader with the software. Stata code is provided in the main text. For those who are interested in using Python or R instead, the corresponding code is provided on the online resources page (www.cambridge.org/mavruk).
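As a rough Python analogue of the merge and append operations the chapter performs in Stata (names and values below are made up; the official companion code lives on the online resources page):

```python
# Merging adds columns by matching rows on a key; appending adds rows.
# Two tiny "datasets" as lists of dicts, keyed on a firm identifier.
firms = [{"id": 1, "name": "Alpha"}, {"id": 2, "name": "Beta"}]
financials = [{"id": 1, "leverage": 0.4}, {"id": 2, "leverage": 0.7}]

# 1:1 merge on id (like Stata's `merge 1:1 id using ...`)
by_id = {row["id"]: row for row in financials}
merged = [{**f, **by_id.get(f["id"], {})} for f in firms]

# append a new observation (like Stata's `append using ...`)
extra = [{"id": 3, "name": "Gamma", "leverage": 0.5}]
appended = merged + extra
```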
Decentralized consensus protocols have a variety of parameters to be set during deployment for practical applications in blockchains. The analyses in most research papers prove the security of the blockchain while usually providing a range of acceptable values, thus allowing further tuning of the protocol parameters. In this paper, we investigate Ouroboros Praos, the proof-of-stake consensus protocol deployed in Cardano and other blockchains. In contrast to its predecessor, Praos allows multiple honest slot leaders in the same slot, which leads to fork creation and resolution and consequently decreases the block rate per time unit. Analyzing the dependence on protocol parameters such as the active slot coefficient and the p2p network block propagation time, we obtain new theoretical results and explicit formulas for the expected length of the longest chain created during a Praos epoch, the length of the longest unintentional fork created by honest slot leaders, the efficiency of the block generation procedure (the ratio of blocks included in the final longest chain to the total number of created blocks), and other characteristics of blockchain throughput.
We study these parameters as stochastic characteristics of the block generation process. The model is described in terms of a two-parameter family $\xi_{ij}$ of independent Bernoulli random variables, which generates a deformation of the binomial distribution by a positive integer parameter, the delay (deterministic or random). An essential part of the paper is a study of this deformation in terms of denumerable Markov chains and generating functions.
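A toy simulation of the slot-leadership mechanism described above (a sketch under simplifying assumptions, not the paper's model): each of n parties independently becomes leader of a slot with probability q, so the number of leaders per slot is Binomial(n, q), and slots with two or more leaders are the ones that can start unintentional forks.

```python
import random

# Count empty, single-leader, and multi-leader slots over an epoch.
# n_parties and q are hypothetical stand-ins for the stake
# distribution and the per-party leadership probability implied
# by the active slot coefficient.
def simulate(slots, n_parties, q, seed=1):
    rng = random.Random(seed)  # fixed seed for reproducibility
    single = multi = empty = 0
    for _ in range(slots):
        leaders = sum(rng.random() < q for _ in range(n_parties))
        if leaders == 0:
            empty += 1
        elif leaders == 1:
            single += 1
        else:
            multi += 1  # potential unintentional fork
    return single, multi, empty

single, multi, empty = simulate(slots=10_000, n_parties=10, q=0.01)
```

Even this crude model shows the trade-off the paper quantifies: raising q produces blocks faster but increases the share of multi-leader slots, and hence the fraction of created blocks wasted in forks.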
This chapter provides an introduction to the book. The book aims to deepen the reader's understanding (at Bachelor level and above) of empirical research in corporate finance studies and improve their ability to apply econometric methods in their own studies. It may not be general enough for an econometrics course for all finance students, including those interested in asset-pricing studies. However, some of the examples in the book cover studies of the behavior of individual and institutional investors and how this relates to the cost of capital of firms. This link is important to understand in empirical corporate finance studies. The chapter provides a short discussion of this link and then gives a detailed outline of the book. The book is a practical methods book, covering essential basic econometric models and illustrating how to apply them in research, closely following some of the well-written and pedagogical books in econometrics.
Previous chapters aimed to present different research designs and econometric models used in empirical corporate finance studies. In this chapter, the focus is on the structure and writing of your research findings. Good writing is key to conveying the findings from your research to readers. You should be able to demonstrate your critical and analytical skills, and discuss the results from your research in a structured way when writing your thesis or academic paper. This chapter discusses the details of the sections included in empirical papers, and thus presents a standard example of the structure of an empirical corporate finance paper. This structure is general and may differ depending on the type of empirical paper and the field. However, beginning with the standard content of the sections will help you to better structure your ideas and writing. The chapter ends by providing some writing suggestions.