This volume provides a practical introduction to the method of maximum likelihood as used in social science research. Ward and Ahlquist focus on applied computation in R and use real social science data from actual, published research. Unique among books at this level, it develops simulation-based tools for model evaluation and selection alongside statistical inference. The book covers standard models for categorical data as well as counts, duration data, and strategies for dealing with data missingness. By working through examples, math, and code, the authors build an understanding of the contexts in which maximum likelihood methods are useful and develop skills in translating mathematical statements into executable computer code. Readers will not only be taught to use likelihood-based tools and generate meaningful interpretations, but will also acquire a solid foundation for continued study of more advanced statistical techniques.
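The blurb's theme of translating mathematical statements into executable code can be illustrated with the simplest likelihood there is. The book works in R; the sketch below is a hypothetical Python analogue (data invented) that maximizes a Bernoulli log-likelihood over a grid of candidate probabilities:

```python
import numpy as np

# Hypothetical binary outcomes (e.g., 1 = voted, 0 = did not).
y = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])

def loglik(p, y):
    """Bernoulli log-likelihood for success probability p."""
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Evaluate over a grid and take the maximizer.
grid = np.linspace(0.01, 0.99, 981)
p_hat = grid[np.argmax([loglik(p, y) for p in grid])]
print(round(p_hat, 2))  # 0.7 -- the sample mean, as theory predicts
```

For the Bernoulli case the maximizer has a closed form (the sample mean), which makes the grid search easy to check against the math.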
Explains why model evaluation and selection must occur prior to inference, then develops the out-of-sample prediction heuristic for model evaluation. Specific attention is paid to cross-validation.
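The out-of-sample logic behind cross-validation can be sketched in a few lines. This is an illustrative Python analogue (the book works in R) with entirely synthetic data: a manual k-fold split comparing cross-validated mean squared error for a correctly specified linear model against an overfit polynomial.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(-2, 2, n)
y = 1.0 + 0.5 * x + rng.normal(0, 1, n)  # true model is linear

def cv_mse(degree, k=5):
    """k-fold cross-validated MSE for a polynomial fit of given degree."""
    idx = rng.permutation(n)
    folds = np.array_split(idx, k)
    errs = []
    for fold in folds:
        train = np.setdiff1d(idx, fold)          # fit on k-1 folds...
        coefs = np.polyfit(x[train], y[train], degree)
        pred = np.polyval(coefs, x[fold])        # ...predict the held-out fold
        errs.append(np.mean((y[fold] - pred) ** 2))
    return np.mean(errs)

# The linear fit's CV error sits near the irreducible noise variance (1.0);
# higher-degree fits pay an out-of-sample penalty for chasing noise.
print(round(cv_mse(1), 2), round(cv_mse(8), 2))
```

The key point mirrors the chapter's heuristic: model comparison is done on data the model never saw, not on in-sample fit.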
Discusses problems induced by missing data and the multiple imputation strategies for dealing with them. Introduces and compares EM and Gaussian copula imputation methods.
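The core multiple-imputation loop, fill in the missing values several times with draws that reflect uncertainty, then pool across completed data sets, can be sketched as below. This is a minimal Python illustration with invented data, using stochastic regression imputation rather than the EM or copula methods the chapter covers; pooling the point estimate by averaging is the first of Rubin's combining rules.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(0, 1, n)
y = 2.0 + 1.5 * x + rng.normal(0, 1, n)
miss = rng.random(n) < 0.3            # ~30% of y missing at random
y_obs = np.where(miss, np.nan, y)

def impute_once(x, y_obs, miss):
    """One stochastic regression imputation of the missing y values."""
    b1, b0 = np.polyfit(x[~miss], y_obs[~miss], 1)
    resid_sd = np.std(y_obs[~miss] - (b0 + b1 * x[~miss]))
    y_fill = y_obs.copy()
    # Draw from the predictive distribution, not just the fitted line,
    # so the imputations carry uncertainty.
    y_fill[miss] = b0 + b1 * x[miss] + rng.normal(0, resid_sd, miss.sum())
    return y_fill

# m completed data sets; pool the mean of y by averaging across them.
m = 20
est = [np.mean(impute_once(x, y_obs, miss)) for _ in range(m)]
print(round(np.mean(est), 1))  # close to the complete-data mean of y
```

A full analysis would also combine within- and between-imputation variances for standard errors, which is where the rest of Rubin's rules come in.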
Discusses the mechanics and computation for effectively interpreting the output of likelihood-based models in ways that are easy to understand and communicate.
Builds likelihood-based models for binary data and describes how to evaluate the model and use sampling tools to generate meaningful interpretations of model quantities.
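Both halves of that chapter, fitting a binary-outcome model by maximum likelihood and then simulating from the estimated sampling distribution to interpret it, can be sketched together. This is a hypothetical Python analogue (the book works in R) with synthetic data: Newton-Raphson on the logit log-likelihood, then draws from the approximate sampling distribution of the coefficients to express a predicted probability with uncertainty.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.normal(0, 1, n)])
beta_true = np.array([-0.5, 1.0])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

# Newton-Raphson on the logit log-likelihood.
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - p)                      # score vector
    hess = -(X.T * (p * (1 - p))) @ X         # Hessian of the log-likelihood
    beta = beta - np.linalg.solve(hess, grad)

# Simulate coefficient draws to get a predicted probability, with
# uncertainty, at a covariate value of interest (here x = 1).
vcov = np.linalg.inv(-hess)                   # inverse observed information
draws = rng.multivariate_normal(beta, vcov, 1000)
probs = 1 / (1 + np.exp(-draws @ np.array([1.0, 1.0])))
print(beta.round(2), round(probs.mean(), 2))  # beta near beta_true
```

The spread of `probs` (e.g., its 2.5th and 97.5th percentiles) is what turns a point estimate into a communicable quantity of interest.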
Applies the GLM framework to modeling ordered categorical responses. Discusses the assumptions underlying the ordered logit/probit models and provides diagnostics. Discusses categorical regressors.
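The defining mechanic of the ordered logit, category probabilities as differences of cumulative logistic probabilities at estimated cutpoints, fits in a few lines. This is an illustrative Python sketch with invented cutpoints, not code from the book:

```python
import numpy as np

def ologit_probs(eta, cuts):
    """Ordered-logit category probabilities given linear predictor eta
    and increasing cutpoints `cuts` (K-1 of them for K categories)."""
    cdf = 1 / (1 + np.exp(-(np.concatenate([cuts, [np.inf]]) - eta)))
    return np.diff(np.concatenate([[0.0], cdf]))

# Hypothetical 3-category model: cutpoints at -1 and 1, predictor eta = 0.
p = ologit_probs(0.0, np.array([-1.0, 1.0]))
print(p.round(2))  # [0.27 0.46 0.27]
```

Shifting `eta` slides probability mass monotonically across the ordered categories, which is exactly the proportional-odds assumption the chapter's diagnostics probe.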
Provides some of the details of numerical optimization as applied to likelihood functions and discusses possible problems, both computational and data-generated.
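One of those numerical details, reparameterizing so the optimizer cannot wander out of the valid parameter space, can be shown with a tiny Newton-Raphson example. This is an illustrative Python sketch with made-up counts: the Poisson rate is maximized in theta = log(lambda) so the rate stays positive throughout the iterations.

```python
import numpy as np

y = np.array([2, 3, 1, 4, 2, 0, 3, 5])  # hypothetical counts

# Poisson log-likelihood in theta = log(lambda): sum(y)*theta - n*exp(theta).
theta = 0.0
for _ in range(50):
    lam = np.exp(theta)
    grad = y.sum() - len(y) * lam        # d loglik / d theta
    hess = -len(y) * lam                 # d^2 loglik / d theta^2
    theta -= grad / hess                 # Newton-Raphson step

print(round(np.exp(theta), 2))  # 2.5 -- the sample mean, as theory predicts
```

The same trick (log for positive parameters, logit for probabilities) sidesteps a whole class of the computational problems the chapter catalogues.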
Provides an overview of the basic concepts and components of likelihood-based modeling and introduces the process of maximizing a likelihood. Shows that the OLS model can be derived in a likelihood framework.
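The OLS-as-MLE result rests on one observation: with normal errors and fixed error variance, maximizing the log-likelihood is the same as minimizing the sum of squared residuals. A small Python check (synthetic data, names invented; the book works in R) confirms numerically that perturbing the least-squares coefficients can only lower the likelihood:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

def neg_loglik(params, X, y):
    """Negative normal log-likelihood; params = (beta..., log sigma)."""
    beta, log_s = params[:-1], params[-1]
    s = np.exp(log_s)
    r = y - X @ beta
    return 0.5 * len(y) * np.log(2 * np.pi * s**2) + np.sum(r**2) / (2 * s**2)

b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
# At any fixed sigma, the OLS coefficients maximize the likelihood:
best = neg_loglik(np.append(b_ols, 0.0), X, y)
worse = neg_loglik(np.append(b_ols + 0.1, 0.0), X, y)
print(best < worse)  # True
```

The only term in the log-likelihood that depends on beta is the sum of squared residuals, which is exactly what OLS minimizes.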
Introduces the notion of dependence across observations in the context of temporal duration models. Describes BTSCS, parametric, and Cox models. Introduces split-population duration models.
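The likelihood machinery behind parametric duration models, and the way censored spells contribute survival rather than density terms, can be illustrated with the exponential case, whose censored-data MLE has a closed form. A hypothetical Python sketch (synthetic durations, fixed censoring time; not code from the book):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
t_true = rng.exponential(scale=2.0, size=n)   # true durations, mean 2
c = 3.0                                       # fixed right-censoring time
t = np.minimum(t_true, c)                     # observed (possibly censored)
d = (t_true <= c).astype(float)               # 1 = event observed, 0 = censored

# Exponential log-likelihood with right-censoring:
#   loglik(rate) = sum(d) * log(rate) - rate * sum(t),
# which is maximized at events divided by total exposure time:
rate_hat = d.sum() / t.sum()
print(round(1 / rate_hat, 1))  # estimated mean duration, near 2
```

Naively averaging the censored `t` would understate the mean duration; the likelihood handles censoring correctly by counting censored spells as exposure without an event.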
Applies the GLM framework to modeling unordered categorical responses. Discusses the IIA assumption for the multinomial logit and the many tools developed for times when it fails.
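The multinomial logit's choice probabilities are a softmax over alternative-specific linear predictors, with one alternative's coefficients fixed at zero for identification. A small illustrative Python sketch (coefficients invented, not estimated):

```python
import numpy as np

def mnl_probs(x, betas):
    """Multinomial-logit choice probabilities; betas has one row per
    alternative, with the first row fixed at zero for identification."""
    u = betas @ x
    e = np.exp(u - u.max())        # subtract max for numerical stability
    return e / e.sum()

# Hypothetical 3-alternative model with an intercept and one regressor.
betas = np.array([[0.0, 0.0],      # base category
                  [0.5, 1.0],
                  [-0.2, 0.3]])
p = mnl_probs(np.array([1.0, 0.8]), betas)
print(round(p.sum(), 1))  # 1.0 -- probabilities sum to one
```

Note that the ratio of any two alternatives' probabilities here depends only on their own utilities, which is the IIA property the chapter's diagnostics interrogate.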
Applies the GLM framework to modeling event count data. Discusses the common problem of overdispersion and the methods for extending the model to account for it.
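Overdispersion is easy to see in simulation: a Poisson model forces the variance to equal the mean, but counts generated with a randomly varying rate (a gamma-mixed Poisson, i.e., negative binomial) violate that restriction. An illustrative Python sketch with synthetic data, not code from the book:

```python
import numpy as np

rng = np.random.default_rng(5)
# Overdispersed counts: each observation's Poisson rate is itself a
# gamma draw, so marginally the counts are negative binomial.
lam = rng.gamma(shape=2.0, scale=1.5, size=1000)   # mean 3, variance 4.5
y = rng.poisson(lam)

# Under a plain Poisson model the variance/mean ratio is 1; here the
# theoretical ratio is (3 + 4.5) / 3 = 2.5, a clear signal to move to a
# model (e.g., negative binomial) that accommodates the extra variance.
print(y.var() / y.mean() > 1.5)  # True
```

A variance/mean ratio well above one in real count data is the usual symptom motivating the chapter's extensions of the Poisson model.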