Elements of Structural Equation Models (SEMs) blends theoretical foundations with practical applications, serving as both a learning tool and a lasting reference. Synthesizing material from diverse sources, including the author's own contributions, it provides a rigorous yet accessible guide for graduate students, faculty, and researchers across social, behavioral, health, and data sciences. The book covers essential SEM concepts – model assumptions, identification, estimation, and diagnostics – while also addressing advanced topics often overlooked, such as Bayesian SEMs, model-implied instrumental variables, and categorical variables. Readers will gain insights into missing data, longitudinal models, and comparisons with Directed Acyclic Graphs (DAGs). By presenting complex technical content in a clear, structured way, this authoritative resource deepens readers' understanding of SEMs, making it an indispensable guide for both newcomers and experts seeking a definitive treatment of the field.
How can admissions officers, employers, and scholarship committees maximize the accuracy of predictions of individual performance while minimizing adverse impact due to group differences? Testing offers a straightforward solution to the first half of this problem. Tests are the best way to predict how someone will perform in school, in the military, in medicine, or while controlling airline traffic and flying a plane. Tests are also useful beyond personnel selection, such as for the selection of a college major or courses. However, the other side of this problem is more complex. The use of tests is always accompanied by group differences that can result in continued systemic discrimination by limiting opportunities for those who are marginalized. This book charts an approach to using tests that incorporates evidence, transparency, and societal values to maximize efficiency and fairness.
What makes populism both a threat and a corrective to democracy in India, setting it apart from other contexts? A Logic of Populism explores this question using a novel set-theoretic methodology and a comprehensive study of populist leaders across Indian states. It defines populists as those who draw boundaries dividing people, while democratic institutions shape these divisions' political significance. Populists create fractures, yet democratic engagement channels these conflicts toward the common good. This book is essential for those seeking to understand Indian democracy and populism's role in political modernization beyond Western perspectives. It is particularly valuable for researchers in qualitative methodologies and theory-building in the social sciences. By conceptualizing populism as a defining force in contemporary public affairs, the book offers crucial insights into democracy's evolving landscape in India, making it a significant contribution to political studies and governance discourse.
This authoritative guide shows consumers and users of test scores when and how to provide subscores and how to make informed decisions based on them. The book is designed to be accessible to practitioners and score users with varying levels of technical expertise, from executives of testing organizations and students who take tests to graduate students in educational measurement, psychometricians, and test developers. The theoretical background required to evaluate and improve subscores is provided alongside examples of tests with subscores to illustrate their use and misuse. The first chapter covers the history of tests, subtests, scores, and subscores. Later chapters cover subscore reporting, evaluating and improving the quality of subscores, and alternatives to subscores when they are not appropriate. This thorough introduction to the existing research and best practices will be useful to graduate students, researchers, and practitioners.
Statistics Using Stata uses a highly accessible and lively writing style to seamlessly integrate the learning of the latest version of Stata (17) with an introduction to applied statistics using real data in the behavioral, social, and health sciences. The text is comprehensive in its content coverage and is suitable at undergraduate and graduate levels. It requires knowledge of basic algebra, but no prior coding experience. It is uniquely focused on the importance of data management as an underlying and key principle of data analysis. It includes a .do-file for each chapter that was used to generate all figures, tables, and analyses for that chapter. These files are intended as models to be adapted and used by readers in conducting their own research. Additional teaching and learning aids include solutions to all end-of-chapter exercises and PowerPoint slides to highlight the important take-aways of each chapter.
Statistics Using R introduces the most up-to-date approaches to R programming alongside an introduction to applied statistics using real data in the behavioral, social, and health sciences. It is uniquely focused on the importance of data management as an underlying and key principle of data analysis. It includes an online R tutorial for learning the basics of R, as well as two R files for each chapter, one in Base R code and the other in tidyverse R code, that were used to generate all figures, tables, and analyses for that chapter. These files are intended as models to be adapted and used by readers in conducting their own research. Additional teaching and learning aids include solutions to all end-of-chapter exercises and PowerPoint slides to highlight the important take-aways of each chapter. This textbook is appropriate for both undergraduate and graduate students in social sciences, applied statistics, and research methods.
Benford's Law is a probability distribution for the leading digit in a set of numbers. This book seeks to improve and systematize the use of Benford's Law in the social sciences to assess the validity of self-reported data. The authors first introduce a new measure of conformity to the Benford distribution that is created using permutation statistical methods and employs the concept of statistical agreement. In a departure from typical Benford applications, this book moves away from using Benford's Law to test whether the data conform to the Benford distribution, and toward using it to draw conclusions about the validity of the data. The concept of 'Benford validity' is developed, which indicates whether a dataset is valid based on comparisons with the Benford distribution and, in relation to this, a diagnostic procedure is devised that assesses the impact of not having Benford validity on data analysis.
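The law the blurb refers to is simple to state: the probability that the leading significant digit of a number equals d is log10(1 + 1/d), so a leading 1 occurs about 30.1% of the time. As a minimal illustration of the distribution itself (not the authors' permutation-based conformity measure), the following Python sketch computes the Benford expectations and compares them with the observed leading-digit frequencies of a sample sequence; the choice of powers of 2 as test data is an assumption for demonstration, these being a classic Benford-conforming sequence.

```python
import math
from collections import Counter

def benford_expected(d: int) -> float:
    """Benford's Law: probability that the leading digit equals d (1-9)."""
    return math.log10(1 + 1 / d)

def leading_digit(x: float) -> int:
    """First significant digit of a nonzero number."""
    # Scientific notation places the leading significant digit first.
    return int(f"{abs(x):.15e}"[0])

def digit_frequencies(data):
    """Observed proportion of each leading digit 1-9 in the data."""
    counts = Counter(leading_digit(x) for x in data if x != 0)
    n = sum(counts.values())
    return {d: counts.get(d, 0) / n for d in range(1, 10)}

# Powers of 2 conform closely to the Benford distribution.
sample = [2 ** k for k in range(1, 200)]
obs = digit_frequencies(sample)
for d in range(1, 10):
    print(d, round(benford_expected(d), 3), round(obs[d], 3))
```

Comparing the two columns digit by digit is the intuition behind any Benford conformity test; the book's contribution is a more principled measure of that agreement.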
Tremendous advancements in technology-based assessment, including new modes of data collection and the use of artificial intelligence, mean that traditional assessment techniques in the fields of psychology, business, education, and health need to be reconsidered. Yet, while technology is pervasive, its spread is not consistent, owing to national differences in economics and culture. Given these trends, this book offers an integrative consolidation of how technology is changing the face of assessments across different regions of the world. The book has three major sections: in Foundations, core issues of computational models, passively sensed data, and privacy concerns are discussed; in Global Perspectives, the book identifies ways technology has changed how we assess human attributes across the world; and in Regional Focus, the book surveys how different regions around the world have adopted technology-based assessments for their unique cultural and societal contexts.
The size and availability of network information has exploded over the last decade. Social scientists now share the stage of network analysis with computer scientists, physicists, and statisticians. While a number of introductions to network analysis are now available, most focus on theory, methods, or application alone. This book integrates all three. Network Analysis is an introduction to both the why and how of Social Network Analysis (SNA). It presents a broad theoretical overview rooted in social scientific approaches and guides users in how network analysis can answer core theoretical questions. It provides a comprehensive overview of descriptive and analytical approaches, including practical tutorials in R with sample data sets. Using an integrated approach, this book aims to quickly bring novice network researchers up to speed while avoiding common programming and analysis mistakes so that they might gain insight into the fundamental theories, key concepts, and methodological application of SNA.
All social and policy researchers need to synthesize data into a visual representation. Producing good visualizations combines creativity and technique. This book teaches the techniques and basics to produce a variety of visualizations, allowing readers to communicate data and analyses in a creative and effective way. Visuals for tables, time series, maps, text, and networks are carefully explained and organized, showing how to choose the right plot for the type of data being analysed and displayed. Examples are drawn from public policy, public safety, education, political tweets, and public health. The presentation proceeds step by step, starting from the basics, in the programming languages R and Python so that readers learn the coding skills while simultaneously becoming familiar with the advantages and disadvantages of each visualization. No prior knowledge of either Python or R is required. Code for all the visualizations is available from the book's website.
This book addresses the role of statistics and probability in the evaluation of forensic evidence, including both theoretical issues and applications in legal contexts. It discusses what evidence is and how it can be quantified, how it should be understood, and how it is applied (and, sometimes, misapplied). After laying out their philosophical position, the authors begin with a detailed study of the likelihood ratio. Following this grounding, they discuss applications of the likelihood ratio to forensic questions, in the abstract and in concrete cases. The analysis of DNA evidence in particular is treated in great detail. Later chapters concern Bayesian networks, frequentist approaches to evidence, the use of belief functions, and the thorny subject of database searches and familial searching. Finally, the authors provide commentary on various recommendation reports for forensic science. Written to be accessible to a wide audience of applied mathematicians, forensic scientists, and scientifically-oriented legal scholars, this book is a must-read for all those interested in the mathematical and philosophical foundations of evidence and belief.
The second edition of Statistics for the Social Sciences prepares students from a wide range of disciplines to interpret and learn the statistical methods critical to their field of study. By using the General Linear Model (GLM), the author builds a foundation that enables students to see how statistical methods are interrelated, allowing them to build on these basic skills. The author makes statistics relevant to students' varying majors by using fascinating real-life examples from the social sciences. Students who use this edition will benefit from clear explanations, warnings against common erroneous beliefs about statistics, and the latest developments in the philosophy, reporting, and practice of statistics in the social sciences. The textbook is packed with helpful pedagogical features including learning goals, guided practice, and reflection questions.
This book looks at how numbers and statistics have been used to underpin quality in news reporting. In doing so, the aim is to challenge some common assumptions about how journalists engage with and use statistics in their quest for quality news. It seeks to improve our understanding of the use of data and statistics as a primary means for the construction of social reality. This is a task, in our view, that is urgent in times of 'post-truth' politics and the rise of 'fake news'. In this sense, the quest to produce 'quality' news, which seems to require incorporating statistics and engaging with data, as laudable and straightforward as it sounds, is far more problematic and complex than is often acknowledged.
Building upon the success of the first edition, Statistics Using Stata uses the latest version of Stata to meet the needs of today's students. Engaging and accessible for students from a variety of mathematical backgrounds, this textbook integrates statistical concepts with the Stata (version 16) software package. It aligns Stata commands with examples based on real data, enabling students to understand statistics in a way that reflects statistical practice. Capitalizing on Stata's menu-driven 'point and click' and program syntax interface, the chapters guide students from the comfortable 'point and click' environment to the beginnings of statistical programming. Its coverage of essential topics gives instructors flexibility in curriculum planning and provides students with more advanced material to prepare for future work. Online resources - including solutions to exercises, PowerPoint slides, and Stata syntax (do-files) for each chapter - allow students to review independently and adapt code to analyze new problems.
Using numerous examples with real data, this textbook closely integrates the learning of statistics with the learning of R. It is suitable for introductory-level learners, allows for curriculum flexibility, and includes, as an online resource, R-code script files for all examples and figures in each chapter, which students can learn from, adapt, and use in their future data-analytic work. Other unique features created specifically for this textbook include an online R tutorial that introduces readers to data frames and other basic elements of the R architecture, and a CRAN library of datasets and functions that is used throughout the book. Essential topics often overlooked in other introductory texts, such as data management, are covered. The textbook includes online solutions to all end-of-chapter exercises and PowerPoint slides for all chapters as additional resources, and is suitable for those who do not have a strong background in mathematics.
Measuring Justice explores the ways in which South African court and managerial prosecutors deal with the quantification of social phenomena - such as justice, professional work or accountability - and with the radical simplification, misrepresentation and editing of these phenomena's inherent complexities that result. While various studies show the concern of professionals about the damaging effects these quantitative forms of accountability have on the creativity, freedom and collaborative nature of expert systems, Mugler shows that the reactions and attitudes of these legal professionals differ substantially. Through careful scrutiny of the everyday work of prosecutors and how they reflect on the relationship between accountability, quantification and law, this book argues that actors who work daily with quantitative accountability measures develop a numerical reflexivity about the process.
How did gross domestic product (GDP) become the world's most influential indicator? Why does it remain the primary measure of societal progress despite being widely criticised for not considering well-being or sustainability? Why have the many beyond-GDP alternatives not managed to effectively challenge GDP's dominance? The success of GDP and the failure of beyond-GDP lie in their underlying communities. The macro-economic community emerged in the aftermath of the Great Depression and WWII. This community formalised its 'language' in the System of National Accounts (SNA), which provided the global terminology with which to communicate. Beyond-GDP, on the other hand, is a heterogeneous community which speaks in many dialects, accents and languages. Unless this changes, the 'beyond-GDP cottage industry' will never beat the 'GDP-multinational'. This book proposes a new roadmap to 2030, detailing how to create a multidisciplinary Wellbeing and Sustainability Science (WSS) with a common language, the System of Global and National Accounts (SGNA).
The primary data driver behind US drug policy is the National Survey on Drug Use and Health. This insider history traces the evolution of the survey and how it has interacted with the political and social climate of the country, from its origins during the Vietnam War to its role in the war on drugs. The book includes first-hand accounts that explain how the data was used and misused by political leaders, why changes were made in the survey design, and what challenges researchers faced in communicating statistical principles to policymakers and leaders. It also makes recommendations for managing survey data collection and reporting in the context of political pressures and technological advances. Survey research students and practitioners will learn practical lessons about questionnaire design, mode effects, sampling, nonresponse, weighting, editing, imputation, statistical significance, and confidentiality. The book also includes common-language explanations of key terms and processes to help data users understand the point of view of survey statisticians.
Written by a quantitative psychologist, this textbook explains complex statistics in accessible language to undergraduates in all branches of the social sciences. Built around the central framework of the General Linear Model (GLM), Statistics for the Social Sciences teaches students how different statistical methods are interrelated. With the GLM as a basis, students with varying levels of background are better equipped to interpret statistics and learn more advanced methods in their later courses. Russell T. Warne makes statistics relevant to students' varying majors by using fascinating real-life examples from the social sciences. Students who use this book will benefit from clear explanations, warnings against common erroneous beliefs about statistics, and the latest developments in the philosophy, reporting and practice of statistics in the social sciences. The textbook is packed with helpful pedagogical features including learning goals, guided practice and reflection questions.
The scientific advances that underpin economic growth and human health would not be possible without research investments. Yet demonstrating the impact of research programs is a challenge, especially in areas that span disciplines and industrial sectors and encompass both public- and private-sector activity. All areas of research are under pressure to demonstrate benefits from federal funding of research. This exciting and innovative study demonstrates new methods and tools to trace the impact of federal research funding on the structure of research, and on the subsequent economic activities of funded researchers. The case study is food safety research, which is critical to avoiding outbreaks of disease. The authors make use of an extraordinary new data infrastructure and apply new techniques in text analysis. Focusing on the impact of US federal food safety research, this book develops vital data-intensive methodologies that have real-world application to many other scientific fields.