Fully relativistic particle-in-cell (PIC) simulations are crucial for advancing our knowledge of plasma physics. Modern supercomputers based on graphics processing units (GPUs) offer the potential to perform PIC simulations of unprecedented scale, but require robust and feature-rich codes that can fully leverage their computational resources. In this work, this demand is addressed by adding GPU acceleration to the PIC code Osiris. An overview of the algorithm, which features a CUDA extension to the underlying Fortran architecture, is given. Detailed performance benchmarks for thermal plasmas are presented, which demonstrate excellent weak scaling on NERSC's Perlmutter supercomputer and high levels of absolute performance. The robustness of the code to model a variety of physical systems is demonstrated via simulations of Weibel filamentation and laser-wakefield acceleration run with dynamic load balancing. Finally, measurements and analysis of energy consumption are provided that indicate that the GPU algorithm is up to ~14 times faster and ~7 times more energy efficient than the optimized CPU algorithm on a node-to-node basis. The described development addresses the PIC simulation community's computational demands both by contributing a robust and performant GPU-accelerated PIC code and by providing insight into efficient use of GPU hardware.
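The abstract describes the algorithm only at a high level. As a concrete anchor, here is a minimal sketch of the relativistic Boris momentum update at the core of any PIC particle push, written in Python with NumPy in normalized units (c = 1); the function name, arguments, and units are illustrative and do not reflect Osiris's actual Fortran/CUDA API:

```python
import numpy as np

def boris_push(u, E, B, q_m, dt):
    """Advance normalized momentum u = gamma*v/c by one time step dt
    using the relativistic Boris scheme (normalized units, c = 1).
    E, B are the fields at the particle; q_m is charge over mass."""
    # First half of the electric-field acceleration.
    u_minus = u + 0.5 * q_m * dt * E
    # Magnetic rotation, with the Lorentz factor evaluated at the half step.
    gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus))
    t = 0.5 * q_m * dt * B / gamma
    u_prime = u_minus + np.cross(u_minus, t)
    s = 2.0 * t / (1.0 + np.dot(t, t))
    u_plus = u_minus + np.cross(u_prime, s)
    # Second half of the electric-field acceleration.
    return u_plus + 0.5 * q_m * dt * E
```

The rotation step conserves |u| exactly when E = 0, which is why the Boris scheme is the standard pusher. In a GPU implementation such as the one described, an update of this kind is typically executed by one CUDA thread per particle; the sketch shows only the per-particle arithmetic.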
Deep Springs Valley (DSV) is a hydrologically isolated valley between the White and Inyo mountains that is commonly excluded from regional paleohydrology and paleoclimatology. Previous studies showed that uplift of Deep Springs ridge (informal name) by the Deep Springs fault defeated streams crossing DSV and hydrologically isolated the valley sometime after eruption of the Pleistocene Bishop Tuff (0.772 Ma). Here, we present tephrochronology and clast counts that reaffirm interruption of the Pliocene–Pleistocene hydrology and formation of DSV during the Pleistocene. Paleontology and infrared stimulated luminescence (IRSL) dates indicate a freshwater lake inundated DSV from ca. 83–61 ka, i.e., during Late Pleistocene Marine Isotope Stages 5a (MIS 5a; ca. 82 ka peak) and 4 (MIS 4; ca. 71–57 ka). The age of pluvial Deep Springs Lake coincides with pluvial lakes in Owens Valley and Columbus Salt Marsh and documents greater effective precipitation in southwestern North America during MIS 5a and MIS 4. In addition, we hypothesize that Deep Springs Lake was a balanced-fill lake that overflowed into Eureka Valley via the Soldier Pass wind gap during MIS 5a and MIS 4. DSV hydrology has implications for dispersal and endemism of the Deep Springs black toad (Anaxyrus exsul).
Recruiting underrepresented people and communities into research is essential for generalizable findings. Ensuring a representative sample of participants can be particularly challenging for practice-level dissemination and implementation trials. Novel use of real-world data about practices and the communities they serve could promote more equitable and inclusive recruitment.
Methods:
We used a comprehensive primary care clinician and practice database, the Virginia All-Payers Claims Database, and the HealthLandscape Virginia mapping tool with community-level socio-ecological information to prospectively inform practice recruitment for a study to help primary care better screen and counsel for unhealthy alcohol use. Throughout recruitment, we measured how similar study practices were to primary care on average, mapped where practices’ patients lived, and iteratively adapted our recruitment strategies.
Results:
In response to practice and community data, we adapted our recruitment strategy three times: first leveraging relationships with residency graduates; then a health system and professional organization approach; followed by a community-targeted approach; and finally a combination of all three. We enrolled 76 practices whose patients live in 97.3% (1844 of 1907) of Virginia’s census tracts. Our overall patient sample had demographics similar to the state’s for race (21.7% vs 20.0% Black), ethnicity (9.5% vs 10.2% Hispanic), insurance status (6.4% vs 8.0% uninsured), and education (26.0% vs 32.5% high school graduate or less). Each practice recruitment approach uniquely included different communities and patients.
Discussion:
Data about primary care practices and the communities they serve can prospectively inform practice recruitment, yielding more representative and inclusive patient cohorts for research participation.
Operation-based exercises are simulation activities of great importance for emergency preparedness, as they simulate real experiences in a guided manner. While their primary purpose is to address organizational emergency preparedness, little is known about the personal benefits to participants and whether these positive changes endure over time.
Methods:
We assessed the immediate and medium-term effectiveness of operation-based exercises on individual preparedness, and the benefits to participants based on self-perception, after a set of four interdisciplinary field exercises organized as part of the MSc in Global Health-Disaster Medicine of the Medical School of the National and Kapodistrian University of Athens, Greece. The field exercises were carried out yearly from 2016 to 2019. Data were collected via questionnaires administered pre-exercise and post-exercise (1 week and 10 months after participation). The sample size was 228 trainees, with a response rate of 88%.
Results:
The majority (95%) stated that Mass Casualty Incident (MCI) exercises are appropriate for disaster management training, both for comprehending theory and for team building. Prior to the exercises, 22% of participants declared themselves ready to respond to a real MCI; upon completion, overall perceived readiness increased to 77%. Trainee feedback indicated enhancement of both technical and non-technical skills (87%) that persisted over time, and revealed a high level of satisfaction with the training.
Conclusion:
This study shows a positive immediate and medium-term impact of operation-based exercises on participants’ technical and non-technical skills and on their self-perception of preparedness.
Varieties of Democracy is the essential user's guide to The Varieties of Democracy project (V-Dem), one of the most ambitious data collection efforts in comparative politics. This global research collaboration sparked a dramatic change in how we study the nature, causes, and consequences of democracy. This book is ambitious in scope: more than a reference guide, it raises standards for causal inferences in democratization research and introduces new, measurable, concepts of democracy and many political institutions. Varieties of Democracy enables anyone interested in democracy - teachers, students, journalists, activists, researchers and others - to analyze V-Dem data in new and exciting ways. This book creates opportunities for V-Dem data to be used in education, research, news analysis, advocacy, policy work, and elsewhere. V-Dem is rapidly becoming the preferred source for democracy data.
Scholars and policy makers need systematic assessments of the validity of the measures produced by V-Dem. In Chapter 6, we present our approach to comparative data validation – the set of steps we take to evaluate the precision, accuracy, and reliability of our measures, both in isolation and compared to extant measures of the same concepts. Our approach assesses the degree to which measures align with shared concepts (content validation), shared rules of translation (data generation assessment), and shared realities (convergent validation). Within convergent validity, we execute two convergent validity tests. First, we examine convergent validity as it is typically conceived – examining convergence between V-Dem measures and extant measures. Second, we evaluate the level of convergence across coders, considering the individual coder and country traits that predict coder convergence. Throughout the chapter, we focus on three indices included in the V-Dem data set: polyarchy, corruption, and core civil society. These three concepts collectively provide a “hard test” for the validity of our data, representing a range of existing measurement approaches, challenges, and solutions.
This chapter sets forth the conceptual scheme for the V-Dem project. We begin by discussing the concept of democracy. Next, we lay out seven principles by which this key concept may be understood – electoral, liberal, majoritarian, consensual, participatory, deliberative, and egalitarian. Each defines a “variety” of democracy, and together they offer a fairly comprehensive accounting of the concept as used in the world today. Next, we show how this seven-part framework fits into our overall thinking about democracy, including multiple levels of disaggregation – to components, subcomponents, and indicators. The final section of the chapter discusses several important caveats and clarifications pertaining to this ambitious taxonomic exercise.
This chapter recounts how a project of this scale came together and why it has succeeded. Five main factors were responsible for V-Dem’s success: timing, inclusion, deliberation, administrative centralization, and fund-raising. First, planning for V-Dem began at a time when both social scientists and practitioners were realizing that they needed better democracy measures. This made it possible to recruit collaborators and find funding. Second, the leaders of the project were always eager to expand the team to acquire whatever expertise they lacked and share credit with everyone who contributed. Third, the project leaders practiced an intensely deliberative decision-making style to ensure that all points of view were consulted and only decisions that won wide acceptance were adopted. Fourth, centralizing the execution of the agreed-upon tasks helped tremendously by streamlining processes and promoting standardization, documentation, professionalization, and coordination of a large number of intricate steps. Finally, successful fund-raising from a mix of both research foundations and bilateral and multilateral organizations has been critical.
In this chapter we focus on the measurement of five key principles of democracy – electoral, liberal, participatory, deliberative, and egalitarian. For each principle, we discuss (1) the theoretical rationale for the selected indicators, (2) whether these indicators are correlated strongly enough to warrant being collapsed into an index, and (3) the justification of aggregation rules for moving from indicators to components and from components to higher-level indices. In each section we also (4) highlight the top- and bottom-five countries on each principle of democracy in early (1812 or 1912) and late (2012) years of our sample period, as well as the aggregate trend over the whole time period 1789–2017 (where applicable). Finally, we (5) look at how the different principles are intercorrelated in order to assess the trade-offs involved between the conceptual parsimony achieved by aggregating to a few general concepts and the retention of useful variation permitted by aggregating less.
Four characteristics of V-Dem data present distinct opportunities and challenges for explanatory analysis: (1) the large number of democracy indicators (i.e., variables), (2) the measurement of concepts by multiple coders filtered through the V-Dem measurement model, (3) the large number of years in the data set, and (4) the ex ante potential for dependence across countries (generically referred to as spatial dependence). This chapter discusses three challenges and ten opportunities that are implied by these characteristics. At the end of this chapter, we also discuss three assumptions that are implicit in most analyses of observational indicators of macro-features at the national level, which aim to draw conclusions about causal relationships.
Users of V-Dem data should take care to understand how the data are generated because the data collection strategies have consequences for the validity, reliability, and proper interpretation of the values. Chapters 4 and 5 explain how we process the data after collecting the raw scores and how we aggregate the most specific indicators into more general indices. In this chapter we explain where the raw scores come from. We distinguish among the different types of data that V-Dem reports and describe the processes that produce each type and the infrastructure required to execute these processes.
V-Dem relies on country experts who code a host of ordinal variables, providing subjective ratings of latent – that is, not directly observable – regime characteristics. Sets of around five experts rate each case, and each rater works independently. Our statistical tools model patterns of disagreement between experts, who may offer divergent ratings because of differences of opinion, variation in scale conceptualization, or mistakes. These tools allow us to aggregate ratings into point estimates of latent concepts and quantify our uncertainty around these estimates. This chapter describes item response theory models that account for and adjust for differential item functioning (i.e., differences in how experts apply ordinal scales to cases) and variation in rater reliability (i.e., random error). We also discuss key challenges specific to applying item response theory to expert-coded cross-national panel data, explain how we address them, highlight potential problems with our current framework, and describe long-term plans for improving our models and estimates. Finally, we provide an overview of the end-user-accessible products of the V-Dem measurement model.
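To make the motivation for differential-item-functioning adjustment concrete, here is a deliberately simplified numerical illustration in Python; this is a toy two-way fit, not V-Dem's actual Bayesian ordinal IRT model, and all quantities (offsets, noise levels, rating patterns) are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: a latent trait per case, observed by experts who each
# apply a systematic scale shift (a crude stand-in for differential item
# functioning) plus noise, and who each rate only a subset of cases.
n_cases, n_experts = 300, 10
latent = rng.normal(size=n_cases)
offsets = rng.normal(scale=1.0, size=n_experts)
ratings = (latent[:, None] + offsets[None, :]
           + rng.normal(scale=0.3, size=(n_cases, n_experts)))
rated = rng.random((n_cases, n_experts)) < 0.7   # each expert skips ~30% of cases
ratings[~rated] = np.nan

# Naive aggregation: average the available ratings for each case. This is
# biased by which experts happened to rate that case.
naive = np.nanmean(ratings, axis=1)

# Adjusted aggregation: alternately estimate expert offsets and case scores
# (a two-way ANOVA-style fit, far simpler than a true IRT model).
est = naive.copy()
for _ in range(25):
    est_off = np.nanmean(ratings - est[:, None], axis=0)
    est = np.nanmean(ratings - est_off[None, :], axis=1)

def rmse_to_latent(x):
    # Scores are identified only up to an additive constant; center first.
    d = x - latent
    return np.sqrt(np.mean((d - d.mean()) ** 2))
```

In this simulation the offset-adjusted estimates track the latent trait substantially more closely than the naive average. V-Dem's actual models go much further, handling ordinal scales, rater reliability, and principled uncertainty quantification.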