In regressions where the dependent variable takes limited values, such as 0 and 1, or takes categorical values, using the OLS estimation method will likely produce biased and inconsistent results. Because the dependent variable is either discontinuous or bounded in range, one of the assumptions of the CLRM is violated (namely, that the error term is normally distributed conditional on the independent variables). This chapter focuses on limited dependent-variable models, covering, for example, firm decision-making, capital structure decisions, and investor decision-making. The chapter presents and discusses the linear probability model, the maximum-likelihood estimator, the probit model, the logit model, ordered probit and logit models, the multinomial logit model, the conditional logit model, the tobit model, the Heckman selection model, and count-data models. It covers the assumptions behind and applications of these models. As usual, the chapter provides an application of selected limited dependent-variable models, lab work, and a mini case study.
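For a flavor of what such applications look like, the sketch below fits a linear probability model, a probit, and a logit to simulated binary data. The variable names (leverage, size, pays_dividend) are hypothetical; the book's own applications use Stata, with Python and R equivalents on the online resources page.

```python
# Minimal sketch (simulated, hypothetical data): comparing the linear
# probability model with probit and logit estimates for a binary outcome.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000
leverage = rng.normal(size=n)                      # hypothetical regressor
size = rng.normal(size=n)                          # hypothetical regressor
latent = 0.8 * leverage - 0.5 * size + rng.logistic(size=n)
pays_dividend = (latent > 0).astype(int)           # binary dependent variable

X = sm.add_constant(np.column_stack([leverage, size]))

lpm = sm.OLS(pays_dividend, X).fit(cov_type="HC1")   # linear probability model
probit = sm.Probit(pays_dividend, X).fit(disp=0)     # probit via MLE
logit = sm.Logit(pays_dividend, X).fit(disp=0)       # logit via MLE

print(lpm.params, probit.params, logit.params, sep="\n")
```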
Corporate finance research requires close consideration of the assumptions underlying the econometric models applied to test hypotheses. This is partly because, as the field has evolved, more complex relationships have been examined, some of which pose problems of endogeneity. Endogeneity violates the assumptions of the CLRM, and it is so central that this book devotes a whole chapter to it. The chapter covers the sources of endogeneity bias and the methods most commonly applied to cross-sectional data to deal with the endogeneity problem in corporate finance. These methods include two-stage least squares (the so-called instrumental variables, or IV, approach), treatment effects, matching techniques, and regression discontinuity design (RDD). Applications are provided for the IV and RDD approaches. The chapter ends with an application of the most common methods to real data, lab work, and a mini case study.
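As a minimal illustration of the IV idea, the sketch below runs two-stage least squares by hand on simulated data; the instrument and variable names are hypothetical, and a dedicated IV routine would be used in practice.

```python
# Minimal two-stage least squares (IV) sketch on simulated data; the
# instrument z and the variable names are hypothetical illustrations.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=n)                 # instrument: relevant and excluded
u = rng.normal(size=n)                 # unobserved confounder
x = 0.9 * z + u + rng.normal(size=n)   # endogenous regressor (correlated with u)
y = 1.0 + 0.5 * x + u + rng.normal(size=n)

# First stage: project the endogenous regressor on the instrument.
first = sm.OLS(x, sm.add_constant(z)).fit()
x_hat = first.fittedvalues

# Second stage: replace x with its fitted values.
# NOTE: the naive second-stage standard errors are invalid; dedicated IV
# routines (e.g. Stata's ivregress, or linearmodels' IV2SLS) correct them.
second = sm.OLS(y, sm.add_constant(x_hat)).fit()
print(second.params)   # slope should be close to 0.5, unlike naive OLS on x
```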
Data management concerns collecting, processing, analyzing, organizing, storing, and maintaining the data you collect for a research design. The focus in this chapter is on learning how to use Stata and on applying data-management techniques to a provided dataset. No previous knowledge is required for the applications. The chapter goes through the basic operations of data management, including missing-value analysis and outlier analysis. It then covers descriptive statistics (univariate analysis) and bivariate analysis. Finally, it ends by discussing how to merge and append datasets. This chapter is important for proceeding with the applications, lab work, and mini case studies in the following chapters, since it is a means of becoming familiar with the software. Stata code is provided in the main text. For those who are interested in using Python or R instead, the corresponding code is provided on the online resources page (www.cambridge.org/mavruk).
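For readers following the Python track, the sketch below mirrors the chapter's main steps in pandas; the datasets and variable names are hypothetical stand-ins for the provided data.

```python
# Minimal pandas sketch of the chapter's data-management steps; the
# frames and variables are hypothetical (the main text works in Stata).
import pandas as pd

firms = pd.DataFrame({"firm_id": [1, 2, 3], "industry": ["A", "B", "A"]})
ret = pd.DataFrame({"firm_id": [1, 2, 2], "year": [2020, 2020, 2021],
                    "roa": [0.05, None, 0.40]})

print(ret.isna().sum())                       # missing-value analysis
ret["roa"] = ret["roa"].fillna(ret["roa"].median())

# Winsorize an outlier-prone variable at the 1st/99th percentiles.
lo, hi = ret["roa"].quantile([0.01, 0.99])
ret["roa_w"] = ret["roa"].clip(lo, hi)

merged = ret.merge(firms, on="firm_id", how="left")   # Stata: merge
stacked = pd.concat([ret, ret], ignore_index=True)    # Stata: append
print(merged.describe())                              # descriptive statistics
```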
Decentralized consensus protocols have a variety of parameters that must be set when they are deployed in practical blockchain applications. The analysis given in most research papers proves the security of the blockchain while providing a range of acceptable parameter values, thus allowing further tuning of the protocol parameters. In this paper, we investigate Ouroboros Praos, the proof-of-stake consensus protocol deployed in Cardano and other blockchains. In contrast to its predecessor, Praos allows multiple honest slot leaders in the same slot, which leads to fork creation and resolution and consequently decreases the block rate per time unit. Analyzing the dependence on protocol parameters such as the active slot coefficient and the peer-to-peer network block propagation time, we obtain new theoretical results and explicit formulas for the expectation of the length of the longest chain created during a Praos epoch, the length of the longest unintentional fork created by honest slot leaders, the efficiency of the block generation procedure (the ratio of blocks included in the final longest chain to the total number of created blocks), and other characteristics of blockchain throughput.
We study these parameters as stochastic characteristics of the block generation process. The model is described in terms of a two-parameter family ξ_ij of independent Bernoulli random variables, which generates a deformation of the binomial distribution by a positive integer parameter, the delay (deterministic or random). An essential part of our paper is a study of this deformation in terms of denumerable Markov chains and generating functions.
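For orientation, the standard Praos leader-election rule (taken from the protocol's published description, not from this paper's new results) already shows where unintentional forks come from:

```latex
% Praos leader election (standard protocol description). A party with
% relative stake \alpha wins a slot, independently across parties and
% slots, with probability
\phi_f(\alpha) = 1 - (1 - f)^{\alpha},
% where f is the active slot coefficient. Independent aggregation gives
\Pr[\text{slot non-empty}]
  = 1 - \prod_i \bigl(1 - \phi_f(\alpha_i)\bigr)
  = 1 - (1 - f)^{\sum_i \alpha_i} = f
% for total stake \sum_i \alpha_i = 1. Slots in which two or more
% parties win their independent Bernoulli draws produce the
% unintentional forks whose length the paper analyzes.
```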
This chapter provides an introduction to the book. The book aims to deepen the understanding of empirical research in corporate finance studies among readers at the Bachelor level and above, and to improve their ability to apply econometric methods in their own studies. It may not be general enough for an econometrics course aimed at all finance students, including those interested in asset-pricing studies. However, some of the examples in the book cover studies of the behavior of individual and institutional investors and how this behavior relates to firms' cost of capital. This link is important to understand in empirical corporate finance studies. The chapter provides a short discussion of this link and then a detailed outline of the book. The book is a practical methods book, covering essential basic econometric models and illustrating how to apply them in research, closely following some of the well-written and pedagogical books in econometrics.
Previous chapters aimed to present different research designs and econometric models used in empirical corporate finance studies. In this chapter, the focus is on the structure and writing of your research findings. Good writing is key to conveying the findings from your research to readers. You should be able to demonstrate your critical and analytical skills, and discuss the results from your research in a structured way when writing your thesis or academic paper. This chapter discusses the details of the sections included in empirical papers, and thus presents a standard example of the structure of an empirical corporate finance paper. This structure is general and may differ depending on the type of empirical paper and the field. However, beginning with the standard content of the sections will help you to better structure your ideas and writing. The chapter ends by providing some writing suggestions.
Survey studies offer a balance between large-sample analysis and more sample-specific studies: they can be based on a large cross-section of companies while still allowing us to ask the respondents specific qualitative questions. Survey studies also allow us to measure and quantify decision-making processes and beliefs. Thus, survey data analysis can be seen as a bridge connecting qualitative and quantitative studies in corporate finance research. This chapter covers the most commonly used techniques in survey data analysis. In particular, it focuses on the assumptions and applications of principal components analysis (PCA), but it also briefly explains factor analysis and discusses the similarities and differences between these two methods. The chapter includes an example application of PCA that examines the different dimensions of ownership concentration. Finally, lab work on PCA and a mini case study are provided.
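The sketch below illustrates the mechanics of PCA on simulated survey-style data; the item names (echoing hypothetical ownership-concentration measures) are illustrative only.

```python
# Minimal PCA sketch on simulated survey-style data; the items are
# hypothetical ownership-concentration measures used for illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 300
common = rng.normal(size=n)                     # latent "concentration" factor
X = np.column_stack([
    common + 0.3 * rng.normal(size=n),          # e.g. largest-owner share
    common + 0.3 * rng.normal(size=n),          # e.g. top-5 owner share
    rng.normal(size=n),                         # unrelated item
])

Xs = StandardScaler().fit_transform(X)          # PCA is scale-sensitive
pca = PCA().fit(Xs)
print(pca.explained_variance_ratio_)            # variance explained (scree)
print(pca.components_)                          # loadings on each component
scores = pca.transform(Xs)[:, 0]                # first principal component
```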
Panel data consist of multiple observations for each unit in the data. The units can be investors, firms, households, and so on. Panel datasets allow us to follow these units over time and thus provide an intuitive understanding of the units' behavior. Panel-data analysis also tends to be better at addressing causality issues than cross-sectional analysis. This chapter provides a wide range of examples of panel-data techniques, with the main focus on linear panel-data models. It covers the pooled OLS estimator, the fixed-effects model, the least-squares dummy variable (LSDV) estimator, the difference-in-differences model, the between estimator, the random-effects model, the Hausman–Taylor random-effects IV method, and, briefly, dynamic panel-data models. The chapter also briefly reviews stationarity and the generalized method of moments (GMM). An application of linear panel-data models, as well as lab work and a mini case study, is provided at the end of the chapter.
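To make the fixed-effects idea concrete, the sketch below contrasts pooled OLS with the LSDV estimator on simulated panel data; all names and parameter values are hypothetical.

```python
# Minimal fixed-effects sketch via the least-squares dummy variable
# (LSDV) estimator on simulated panel data; names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
firms, years = 50, 10
df = pd.DataFrame({
    "firm": np.repeat(np.arange(firms), years),
    "year": np.tile(np.arange(years), firms),
})
alpha = rng.normal(size=firms)[df["firm"]]        # firm fixed effect
df["x"] = alpha + rng.normal(size=len(df))        # regressor correlated with FE
df["y"] = 2.0 * df["x"] + alpha + rng.normal(size=len(df))

pooled = smf.ols("y ~ x", data=df).fit()          # pooled OLS: biased here
lsdv = smf.ols("y ~ x + C(firm)", data=df).fit()  # firm dummies absorb alpha
print(pooled.params["x"], lsdv.params["x"])       # LSDV close to the true 2.0
```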
The aim of regression analysis is to summarize the observed data and to study how the response of a dependent variable varies as the values of the independent variable(s) change. Many models examine this relationship by estimating the parameters of a regression model. The classical linear regression model (CLRM) is the basis of all the other models discussed in this book. This chapter discusses the CLRM in detail using the ordinary least squares (OLS) estimation method, whose outcome can also serve as a benchmark in more advanced analyses. The focus is on the assumptions and applications of this technique, starting from a single-regression model with one independent variable and then covering multiple linear regression models with many independent variables. The chapter provides an application to the capital asset pricing model, lab work on the CLRM, and a mini case study.
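A single-regressor OLS run in the spirit of the chapter's CAPM application might look like the sketch below; the return series and coefficient values are simulated and hypothetical.

```python
# Minimal OLS sketch of a CAPM-style market-model regression on
# simulated excess returns; the series and parameters are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
t = 250
mkt_excess = rng.normal(0.0003, 0.01, size=t)          # market excess return
stock_excess = 0.0001 + 1.2 * mkt_excess + rng.normal(0, 0.01, size=t)

X = sm.add_constant(mkt_excess)
res = sm.OLS(stock_excess, X).fit()
print(res.params)        # [alpha, beta]; beta should be near 1.2
print(res.summary())     # t-tests, R-squared, and related diagnostics
```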
ML methods are increasingly being used in (corporate) finance studies, with impressive applications. They can be applied with the aim of reducing prediction error in the models, but they can also be used to extend existing traditional econometric methods. The performance of ML models depends on the quality of the input data and the choice of model. There are many ML models, each with its own specific details, so it is essential to select the appropriate model(s) for the analysis. This chapter briefly reviews some broad types of ML methods. It covers supervised learning, which tends to achieve superior prediction performance by using more flexible functional forms than OLS in the prediction model. It also explains unsupervised learning methods, which derive and learn structural information from conventional data. Finally, the chapter discusses some limitations and drawbacks of ML, as well as potential remedies.
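The flexibility point can be seen in a few lines: on a simulated nonlinear relationship (all names hypothetical), a flexible supervised learner achieves a lower out-of-sample prediction error than OLS.

```python
# Minimal sketch contrasting OLS with a more flexible supervised learner
# on a simulated nonlinear relationship; all names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)
X = rng.uniform(-2, 2, size=(2000, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(type(model).__name__, mean_squared_error(y_te, pred))
```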
Event studies are commonly applied in corporate finance, with a focus on testing market efficiency hypotheses and evaluating the effects of corporate decisions on firm values, stock prices, and other outcome variables. The chapter discusses the event-study model using examples from (i) the return predictability literature; (ii) the effects of firm-level and macro news on stock returns, testing semi-strong-form efficiency; and (iii) insider trading, testing strong-form efficiency. For short-term event studies, the chapter reviews abnormal return (AR) and cumulative abnormal return (CAR) calculations and discusses statistical tests of ARs and CARs. It also covers long-term event studies, discussing buy-and-hold abnormal returns as well as the calendar-time portfolio approach. The chapter provides an application of a short-term event study, examining how stock prices respond to news of a CEO's departure. The chapter ends with lab work and a mini case study.
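The core AR and CAR mechanics fit in a short sketch: estimate a market model on a pre-event window, then compute abnormal returns around a simulated, hypothetical event date.

```python
# Minimal short-term event-study sketch: market-model ARs and the CAR
# around a single simulated (hypothetical) event date.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
est_len, win = 250, 5                      # estimation window; +/- event window
mkt = rng.normal(0.0004, 0.01, size=est_len + 2 * win + 1)
stock = 0.0002 + 1.1 * mkt + rng.normal(0, 0.012, size=len(mkt))
stock[est_len + win] -= 0.03               # simulated price drop on event day

# Market model estimated on the pre-event window.
mm = sm.OLS(stock[:est_len], sm.add_constant(mkt[:est_len])).fit()
alpha, beta = mm.params

event = slice(est_len, est_len + 2 * win + 1)
ar = stock[event] - (alpha + beta * mkt[event])   # abnormal returns
car = ar.cumsum()                                 # cumulative abnormal return
print(ar[win], car[-1])                           # event-day AR and CAR(-5,+5)
```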