During the past decade, analyses drawing on several democracy measures have shown a global trend of democratic retrenchment. While these democracy measures use radically different methodologies, most partially or fully rely on subjective judgments to produce estimates of the level of democracy within states. Such projects continuously grapple with balancing conceptual coverage with the potential for bias (Munck and Verkuilen 2002; Przeworski et al. 2000). Little and Meng (2023; hereafter L&M) reintroduce this debate, arguing that “objective” measures of democracy show little evidence of recent global democratic backsliding. By extension, they posit that time-varying expert bias drives the appearance of democratic retrenchment in measures that incorporate expert judgments. In this article, we engage with (1) broader debates on democracy measurement and democratic backsliding, and (2) L&M’s specific data and conclusions.
Being popular makes it easier for dictators to govern. A growing body of scholarship therefore focuses on the factors that influence authoritarian popularity. However, it is possible that the perception of popularity itself affects incumbent approval in autocracies. We use framing experiments embedded in four surveys in Russia to examine this phenomenon. These experiments reveal that manipulating information—and thereby perceptions—about Russian President Vladimir Putin’s popularity can significantly affect respondents’ support for him. Additional analyses, which rely on a novel combination of framing and list experiments, indicate that these changes in support are not due to preference falsification, but are in fact genuine. This study has implications for research on support for authoritarian leaders and defection cascades in nondemocratic regimes.
Scholars often use language to proxy ethnic identity in studies of conflict and separatism. This conflation of language and ethnicity is misleading: language can cut across ethnic divides and itself has a strong link to identity and social mobility. Language can therefore influence political preferences independently of ethnicity. Results from an original survey of two post-Soviet regions support these claims. Statistical analyses demonstrate that individuals fluent in a peripheral lingua franca are more likely to support separatism than those who are not, while individuals fluent in the language of the central state are less likely to support separatist outcomes. Moreover, linguistic fluency shows a stronger relationship with support for separatism than ethnic identification. These results provide strong evidence that scholars should disaggregate language and ethnic identity in their analyses: language can be more salient for political preferences than ethnicity, and the most salient languages may not even be ethnic.
Models for converting expert-coded data to estimates of latent concepts assume different data-generating processes (DGPs). In this paper, we simulate ecologically valid data according to different assumptions, and examine the degree to which common methods for aggregating expert-coded data (1) recover true values and (2) construct appropriate coverage intervals. We find that the mean and both hierarchical Aldrich–McKelvey (A–M) scaling and hierarchical item-response theory (IRT) models perform similarly when expert error is low; the hierarchical latent variable models (A–M and IRT) outperform the mean when expert error is high. Hierarchical A–M and IRT models generally perform similarly, although IRT models are often more likely to include true values within their coverage intervals. The median and non-hierarchical latent variable models perform poorly under most assumed DGPs.
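The intuition behind this finding — that the mean and latent variable models perform similarly under low expert error but diverge when error is high — can be illustrated with a minimal simulation. The inverse-variance-weighted average below is a crude stand-in for what hierarchical latent variable models estimate (it assumes expert reliabilities are known rather than learned), not an implementation of the paper's actual A–M or IRT models; all names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: 200 cases, each rated by 5 experts.
# Expert j's rating = true latent value + Normal(0, sigma_j) noise.
n_cases, n_experts = 200, 5
truth = rng.normal(0.0, 1.0, n_cases)

def rmse_of_aggregates(sigmas):
    """RMSE of the plain mean vs. an inverse-variance-weighted average
    (a crude proxy for a latent variable model that downweights
    unreliable experts)."""
    sigmas = np.asarray(sigmas, dtype=float)
    ratings = truth[:, None] + rng.normal(0.0, sigmas, (n_cases, n_experts))
    mean_est = ratings.mean(axis=1)
    w = 1.0 / sigmas**2                       # precision weights
    weighted_est = (ratings * w).sum(axis=1) / w.sum()
    rmse = lambda est: float(np.sqrt(((est - truth) ** 2).mean()))
    return rmse(mean_est), rmse(weighted_est)

low_error = rmse_of_aggregates([0.2] * 5)                   # homogeneous, low error
high_error = rmse_of_aggregates([0.2, 0.2, 0.2, 2.0, 2.0])  # two very noisy experts
```

With equal error the two aggregates coincide, so nothing is lost by taking the mean; with heterogeneous error, downweighting the noisy experts substantially reduces RMSE — mirroring the pattern the paper's simulations report for hierarchical models versus the mean.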
Accountability—constraints on a government’s use of political power—is one of the cornerstones of good governance. However, conceptual stretching and a lack of reliable measures have limited cross-national research on this concept. To address this research gap, we use V-Dem data and innovative Bayesian methods to develop new indices of accountability and its subtypes: the extent to which governments are accountable to citizens (vertical accountability), other state institutions (horizontal accountability), and the media and civil society (diagonal accountability). In this article, we describe the conceptual and empirical framework underlying these indices and demonstrate their content, convergent, and construct validity. The resulting indices have unprecedented coverage (1900–present) and offer researchers and policymakers new opportunities to investigate the causes and consequences of accountability and its disaggregated subtypes. Furthermore, the methodology provides a framework for theoretically driven index construction to scholars working with cross-national panel data.
The government of the Chuvash Republic, an ethno-federal region of the Russian Federation, used a targeted and symbolic language policy in an attempt to stabilize the position of the republic's titular language while avoiding conflict with local Russophones and the Russian federal government. The resulting policy allowed the republic's government to frame the existence of an autonomous Chuvash republic – as well as the local elite's form of governance – as being essential to the preservation of the Chuvash language and thus the Chuvash people. In this way, it used language politics to strengthen its position vis-à-vis both local constituents and the Russian federal government. However, the limited nature of the government's program has made its gains tenuous in the face of continuing Russian political and cultural recentralization.
Data sets quantifying phenomena of social-scientific interest often use multiple experts to code latent concepts. While it remains standard practice to report the average score across experts, experts likely vary in both their expertise and their interpretation of question scales. As a result, the mean may be an inaccurate statistic. Item-response theory (IRT) models provide an intuitive method for taking these forms of expert disagreement into account when aggregating ordinal ratings produced by experts, but they have rarely been applied to cross-national expert-coded panel data. We investigate the utility of IRT models for aggregating expert-coded data by comparing the performance of various IRT models to the standard practice of reporting average expert codes, using both data from the V-Dem data set and ecologically motivated simulated data. We find that IRT approaches outperform simple averages when experts vary in reliability and exhibit differential item functioning (DIF). IRT models are also generally robust even in the absence of simulated DIF or varying expert reliability. Our findings suggest that producers of cross-national data sets should adopt IRT techniques to aggregate expert-coded data measuring latent concepts.
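A stripped-down illustration of why differential item functioning matters for aggregation: if each expert applies a systematic shift to the scale and only rates a subset of cases, the plain mean for a case inherits whichever raters that case happened to draw. Iteratively estimating and subtracting expert-level offsets — a toy additive analogue of what IRT models do, not V-Dem's actual measurement model — recovers the latent values more closely. All parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup: 300 cases, 10 experts, each case rated by 3 experts.
# Expert j shifts the whole scale by offsets[j] (a crude analogue of DIF).
n_cases, n_experts, raters_per_case = 300, 10, 3
truth = rng.normal(0.0, 1.0, n_cases)
offsets = rng.normal(0.0, 1.0, n_experts)

ratings = np.full((n_cases, n_experts), np.nan)
for i in range(n_cases):
    who = rng.choice(n_experts, size=raters_per_case, replace=False)
    ratings[i, who] = truth[i] + offsets[who] + rng.normal(0.0, 0.3, raters_per_case)

naive = np.nanmean(ratings, axis=1)  # standard practice: average the codes

# Toy DIF correction: alternately estimate each expert's offset as their
# average deviation from the current case estimates, then re-average.
# This is backfitting for an additive two-way model, not a full IRT model.
adjusted = naive.copy()
for _ in range(10):
    est_off = np.nanmean(ratings - adjusted[:, None], axis=0)
    est_off -= est_off.mean()        # location is not identified; pin it
    adjusted = np.nanmean(ratings - est_off[None, :], axis=1)

rmse = lambda est: float(np.sqrt(np.mean((est - truth) ** 2)))
```

Note the identification step: like the latent variable models discussed in the abstract, the additive model cannot pin down the scale's location from the ratings alone, so the offsets are centered at zero by convention.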