This chapter presents a set of practical, classroom-tested exercises for teaching concept analysis, emphasizing how deliberate engagement with concepts improves research and communication. It outlines several strategies, including reconceptualizing familiar terms by identifying defining and elective attributes, and situating them within semantic fields. It highlights the heuristic power of Collier’s question, “What is that a case of?”, which prompts students to move from empirical examples to abstract categories. Taxonomy construction is another key tool, helping students systematize ideas across domains – from constitutions to cuisine – and understand how classification affects knowledge. Binary sorting (“There are two kinds of people…”) and genre-mapping (“What do you work on?”) also serve to stimulate reflection on research categories. The chapter argues for the pedagogical value of testing, suggesting that students benefit from identifying, defining, and illustrating core concepts as a way to internalize intellectual terrain. Field exams, concept glossaries, and vocabulary tests help solidify these connections. The chapter concludes with a case for “conceptualism” as a core scholarly orientation: Concepts allow generalization while grounding knowledge in empirical cases. Working with concepts is cognitively satisfying and essential for memory, communication, and cumulative learning – what more could a good course (or concept) hope to achieve?
David Collier and Steven Levitsky’s original “Democracy with Adjectives” article (1997) offered a useful corrective to Sartorian thinking about conceptualization, and the new revised version further clarifies its contribution. There is a crucial difference between identifying a diminished subtype (moving from democracy as attributes “A and B and C” to “A and B and not C”) and moving up the ladder of abstraction (from democracy as attributes “A and B and C” to “A and B and either C or not C”). Diminished subtypes are more precise and do not necessarily increase the extension of a concept. This chapter offers a critique of both examples of conceptual innovation, which are grounded in categorical thinking, from the standpoint of the conceptual approach used by the Varieties of Democracy (V-Dem) project, which conceives of democracy as an aggregate of multiple continuous dimensions. Examples using V-Dem data suggest that it is possible to create measurements of specific concepts that are both qualitatively rich and quantitatively precise. However, the measurement of very general concepts such as democracy comes at the cost of some quantitative information and conceptual clarity.
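The difference between a diminished subtype and a move up the ladder of abstraction can be made concrete with toy set logic. The following sketch is my own illustration (the regimes and attributes are fabricated, not from the chapter): a diminished subtype ("A and B and not C") picks out different cases rather than more of them, while abstracting away C ("A and B and either C or not C") strictly enlarges the concept's extension.

```python
# Each toy "regime" is represented by the set of attributes it possesses.
regimes = {
    "r1": {"A", "B", "C"},
    "r2": {"A", "B"},
    "r3": {"A", "C"},
    "r4": {"B", "C"},
}

def matches(attrs, required, excluded=frozenset()):
    """A case fits a concept if it has all required attributes and none excluded."""
    return required <= attrs and not (excluded & attrs)

# Root concept: A and B and C.
root = {r for r, a in regimes.items() if matches(a, {"A", "B", "C"})}
# Diminished subtype: A and B and NOT C -- different cases, not a superset.
diminished = {r for r, a in regimes.items() if matches(a, {"A", "B"}, excluded={"C"})}
# Up the ladder: A and B and (C or not C) reduces to A and B -- a superset of root.
abstracted = {r for r, a in regimes.items() if matches(a, {"A", "B"})}

print(root)        # {'r1'}
print(diminished)  # {'r2'}
print(abstracted)  # {'r1', 'r2'}: the extension grows
```

The point the sketch makes is exactly the chapter's: the diminished subtype is disjoint from the root concept, whereas the abstracted concept contains it.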
This chapter concludes the volume by reflecting on the ongoing value of concept analysis in the social sciences. It revisits the tension between hyperfactualism – obsessive attention to granular detail – and the necessary abstraction that enables generalization. Conceptualization, the authors argue, helps scholars not only communicate more clearly but also observe and describe phenomena more effectively. Far from being a distraction, conceptual work sharpens empirical inquiry. The chapter highlights the interplay between conceptualization and measurement, especially in validity assessment, and underscores how concepts represent and structure knowledge. Attention to concepts also facilitates integration and translation across time, space, and disciplines, as seen in such examples as the V-Dem project. Issues of conceptual boundedness, typologies, and traveling are revisited, drawing on contributions from cognitive linguistics and classic debates between lumpers and splitters. The authors also reflect on how digital tools and formal modeling offer new avenues for concept innovation. Finally, they affirm the importance of teaching concept analysis as a way to clarify students’ thinking, research design, and disciplinary communication. In sum, the chapter defends the overconscious scholar: one who sees in concepts not distraction, but a path toward cumulative, communicable, and intellectually satisfying scholarship.
This chapter examines four examples of concept formation in formal modeling and draws lessons for scholars outside formal theory. Because defining concepts is often more straightforward than solving models, formalization offers valuable tools for clarifying ideas. First, it highlights how formal definitions of conceptual primitives precede and shape model construction. For example, in economics, elasticity is formalized with intuitive ideal-type values, while in political science, audience costs – defined as part of a payoff structure – have endured owing to their clarity and portability. After solving models, scholars can aggregate and disaggregate equilibrium outcomes to build typologies. The chapter illustrates this with two further concepts: state-sponsored protection (refined through a model of cartel–police interactions), and commitment problems (which benefit from formal aggregation of diverse mechanisms). These cases demonstrate how formal tools, like natural-language concept analysis, help structure concepts – offering transparency, precision, and manipulability, though sometimes at the cost of nuance. For nonformal scholars, the chapter encourages “conceptual accounting”: experimenting with mathematical representations to clarify meanings and omissions. Ultimately, the formalization of concepts is not only compatible with broader conceptual analysis – it can enrich it. A glossary is included to assist readers less familiar with formal theory.
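The elasticity example can be sketched in a few lines. This is my own minimal illustration of how a formal definition yields intuitive ideal-type values, not the chapter's model; the function names and the linear demand curve are assumptions for demonstration.

```python
def point_elasticity(price, quantity, dq_dp):
    """Point price elasticity of demand: (dQ/dP) * (P/Q)."""
    return dq_dp * (price / quantity)

def classify(e):
    """Ideal-type benchmarks anchored at |e| = 1 (unit elasticity)."""
    magnitude = abs(e)
    if magnitude > 1:
        return "elastic"
    if magnitude < 1:
        return "inelastic"
    return "unit elastic"

# Toy linear demand Q = 100 - 2P, so dQ/dP = -2.
e = point_elasticity(price=10, quantity=100 - 2 * 10, dq_dp=-2)
print(e, classify(e))  # -0.25 inelastic
```

The benchmarks (elastic, unit elastic, inelastic) illustrate the chapter's point that a formal definition comes packaged with portable ideal-type values around which intuition can organize.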
Based on an integrative review of research on European post-socialist civil societies over the past three decades, we critically examine: (1) how civil society is conceptualized and from whose position; (2) the methodologies employed in this knowledge production, including any reflections on the usefulness and conditions of such knowledge; and (3) existing knowledge gaps and areas where further development is needed within this body of literature. We distinguish three theoretical approaches to European post-socialist civil societies, the Western-centric, critical, and triple-embedded approaches, differentiated by their embeddedness in the field, closeness to the research subjects, and aspirations to include them in theory-building. We argue that, over the past decade, a shift has taken place from a structural perspective on civil societies in the region to a perspective focusing on agency and developments on the ground.
This chapter explores the Brahmanical equivalents of the notion of concept in Western philosophy, and presents two major approaches to the formation of concepts: the imagistic theories of forms and the grammar-based theories of meaning. The first part of the chapter explores the imagistic approach, which appears in the Vedic model of the divine origin of forms. This model inspired the representationalism of the Sāṃkhya philosophical system, according to which external objects are reflected in the mind in the form of impressions (pratyayas), the properties of which are recognized in a general (sāmānya) form. The second part discusses the grammar-based theories of meaning in the context of debates between the Grammarian, the Nyāya-Vaiśeṣika, and the Mīmāṃsā schools. The realism about universals (jāti) in Nyāya and its theory of the role of ‘conceptual construction’ (vikalpa) in perception are discussed in some detail.
The transition from research question to theory is a crucial part of producing a good empirical research paper. A good theory explains patterns in data with a well-articulated “because” clause that specifies a causal mechanism linking the independent variable to the dependent variable. A good theory also identifies the scope conditions and assumptions under which it operates. Developing your theory, articulating definitions of its concepts, and fully explicating its causal mechanism are key components of this process; these are critical for later stages. This is part of why the theory is such an important part of empirical research: without a carefully-thought-out theory, empirical research doesn’t make much sense.
The debate on regime change has experienced a U-turn. Attention has shifted from the regime transitions that occurred during the so-called third wave of democratization to the signals of an incipient reverse trend. However, the actual import and urgency of the problem remain unclear, due to a growing confusion concerning what a process opposite to democratization is, how many distinct forms it can take, and consequently what the empirical referents of the phenomenon are. Building on the notion of “autocratization”, or regime change towards autocracy, the paper elaborates a framework for the comparative analysis of regime changes opposite to democratization. Specifically, we identify political participation, public contestation, and executive limitation as the main dimensions of regime variance, define autocratization accordingly, illustrate and systematize the different regime transitions that fall under this label, and clarify what autocratization is not. The proposed conceptual and analytical framework could support future research on comparative autocratization.
The idea of a “third sector” beyond the arenas of the state and the market is probably one of the most perplexing concepts in modern political and social discourse, encompassing as it does a tremendous diversity of institutions and behaviors that only relatively recently have been perceived in public or scholarly discourse as a distinct sector, and even then with grave misgivings. Initial work on this concept focused on what is still widely regarded as its institutional core, the vast array of private, nonprofit institutions (NPIs), and the volunteer as well as paid workers they mobilize and engage. These institutions share a crucial characteristic that makes it feasible to differentiate them from for-profit enterprises: the fact that they are prohibited from distributing any surplus they generate to their investors, directors, or stakeholders and therefore presumptively serve some broader public interest. Many European scholars, however, have considered this conceptualization too narrow, arguing that cooperatives, mutual societies, and, in recent years, “social enterprises” as well as social norms should also be included. However, this broader concept has remained under-conceptualized in reliable operational terms. This article corrects this shortcoming and presents a consensus operational re-conceptualization of the third sector fashioned by a group of scholars working under the umbrella of the European Union’s Third Sector Impact Project. This re-conceptualization goes well beyond the widely recognized definition of NPIs included in the UN Handbook on Nonprofit Institutions in the System of National Accounts by embracing as well some, but not all, of these additional institutions and forms of direct individual activity, and does so in a way that meets demanding criteria of comparability, operationalizability, and potential for integration into official statistical systems.
Nonprofit organizations (NPOs) have a moral obligation to act in the public interest and are thus expected to lead by example in transparency. Yet, by focusing on the benefits that NPOs derive from being transparent (i.e., the instrumental approach), prior literature has tended to neglect the view of transparency as a moral responsibility regardless of the associated benefits (i.e., the normative approach). We suggest that embracing a normative approach may enable a more comprehensive conceptualization and operationalization of NPO transparency. Based on a systematic literature review, this study conceptualizes NPO transparency as a three-dimensional construct, comprising completeness, accessibility, and inferability. This conceptualization is then translated into an extensive transparency index that NPOs, with sufficient staff and resources, can use to monitor their practices. Finally, we assess the appropriateness of the conceptualization and the index through a content analysis of 432 NPO websites in Belgium. By highlighting inferability as an important but understudied dimension, we contribute to a more comprehensive normative understanding of NPO transparency.
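The logic of translating a multidimensional construct into an index can be sketched as follows. This is a hypothetical illustration only: the item labels, binary coding, and equal weighting are my assumptions for demonstration, not the authors' published index.

```python
from statistics import mean

def dimension_score(items):
    """Share of binary-coded items met within one dimension, in [0, 1]."""
    return mean(items)

def transparency_index(completeness, accessibility, inferability):
    """Unweighted mean of the three dimension scores (assumed aggregation rule)."""
    return mean([
        dimension_score(completeness),
        dimension_score(accessibility),
        dimension_score(inferability),
    ])

# Fabricated example coding for one organization's website.
score = transparency_index(
    completeness=[1, 1, 0, 1],   # e.g. annual report, financials, board list, mission
    accessibility=[1, 0],        # e.g. findable on website, machine-readable
    inferability=[0, 1, 1],      # e.g. context that lets readers draw conclusions
)
print(round(score, 3))  # 0.639
```

Scoring each dimension separately before aggregating is what lets an understudied dimension such as inferability remain visible rather than being washed out in a single flat item count.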
An immediacy condition is elaborated. It is argued that a Kantian intuition is immediate both in the sense that it is direct in its way of representing and in the sense that it is presentational in acquainting the perceiver with its object. It can thus be said to be both of and as of, in present-day terminology. It is representation of, or referential, by virtue of its “intuitive marks,” which are singular, and it is representation as of, or attributive, by virtue of its “predicates of intuition,” which are general. This has been overlooked by most Kant commentators. It may also have contributed to the epistemic downgrading of perceptions that has been ascribed to Kant. It is argued that an empirical intuition hooks on to tropes in the perceived scene by way of the intuitive marks, and that there is also perceptual attribution of proper and common sensibles to the concrete particulars by means of it. Finally, it is discussed how information that is carried by intuitions can be “unpacked” and thought through concepts in cognition of objects “in the proper sense.”
Chapter 2 outlines the book’s conceptual and theoretical frameworks. It bridges studies of regulatory styles with scholarship on legal and political opportunity structures to detail indicators of legalistic governance, which serve as a guide for the subsequent paired case studies. It also theorizes five causal mechanisms that elucidate how activism contributes to more legalistic governance. Finally, it discusses, in probabilistic terms, the conditions under which activism is more likely to contribute to legalistic modes of governance.
The introduction outlines the book’s two main puzzles: First, why is legalistic governance emerging in South Korea and Japan, which were long known for their nonlegalistic regulatory styles? Second, what accounts for the varieties of legalism observed in Korea and Japan? Legalism describes a style of regulation that relies on more formal, detailed, and enforceable laws and regulations, as well as more participatory policy design and implementation processes. This book argues that activists and lawyers are often-overlooked societal drivers behind the emergence of legalism and the broader judicialization of politics in Korea and Japan.
Background:
Motivational Interviewing (MI) has demonstrated significant effects in diverse areas of practice, with over 2,000 controlled clinical trials published. Some criticisms of MI have emerged along the way.
Aims:
We examine theoretical and methodological critiques of MI.
Method:
We discuss three significant theoretical and methodological criticisms of MI: (1) that MI lacks conceptual stability; (2) that MI lacks a theoretical foundation; and (3) that MI is just common factors in psychotherapy.
Results:
It is true that definitions and descriptions of MI have evolved over the years. Mastery of MI clearly varies across providers, and when the quality of an intervention is unmeasured, it is unclear what has been trained or delivered. Reliable and valid tools to assess MI fidelity are available but often unused in outcome studies. It remains unclear what levels of proficiency are necessary to improve client outcomes. Some attempts to minimize variability in the delivery of MI appear to have reduced its effectiveness. The second critique is that MI lacks a theoretical foundation. It is unclear whether and how this is a disadvantage in research and practice. Various theories have been proposed and specific causal chain predictions have been tested. The third critique is that MI is merely the common factors shared across psychotherapies. The contribution of such relational skills is testable. There are specific aspects of MI related to client language that influence client outcomes above and beyond its relational components.
Conclusions:
These critiques highlight important factors to consider when delivering MI, training providers in it, and evaluating MI research.
The second chapter identifies and conceptualizes political atomization. Political atomization explains two outcomes better than existing literature: why incremental expansions in social policy can entrench inequality and how authoritarian states sometimes use public service provision as a tool of social control. It also accounts for how policies said to expand workers’ rights end up undermining their claims to benefits owed to them in China. The chapter explains why alternative explanations are inadequate and introduces the book’s research design, methods, and sources. It situates the theory of political atomization within the literatures on authoritarianism, immigration, and welfare states and elucidates in detail how the process works and why it persists. There are trade-offs and risks to this approach, but embedded inequality ultimately serves the state. Unpacking political atomization illuminates how everyday marginalization of people works on the ground in their lived experiences.
Chapter 2 presents the bounded accountability theory of incumbency bias and its main empirical predictions and outlines the core empirical strategy for testing the theory across the country–office cases. After offering a conceptualization and typology of incumbency bias, the chapter explains how the nature of the information environment encourages retrospective voting and leads to the emergence of incumbency bias. Based on this general mechanism, the chapter predicts that the alignment of policy scope and fiscal institutions explains why some democracies exhibit incumbency advantage while others display an incumbency disadvantage, and demonstrates how exogenous shocks may lead to within-country changes in incumbency bias. The chapter also derives predictions about why there are differences between personal and party incumbency bias. It concludes by developing a novel estimation framework that extends the close-election regression discontinuity design to measure incumbency bias in different political systems and document variation in direction and type within them.
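The intuition behind the close-election regression discontinuity design can be conveyed with a toy estimator. This sketch is my own simplification (a naive difference in means within a bandwidth, with fabricated data), not the book's extended estimation framework: parties that barely won at election t are compared with parties that barely lost, on their vote share at t+1.

```python
def rdd_incumbency_estimate(margins_t, outcomes_t1, bandwidth=0.05):
    """Difference in mean t+1 outcomes for bare winners vs bare losers.

    margins_t: margin of victory at election t (negative = lost).
    outcomes_t1: vote share at election t+1.
    """
    winners = [y for m, y in zip(margins_t, outcomes_t1) if 0 <= m <= bandwidth]
    losers = [y for m, y in zip(margins_t, outcomes_t1) if -bandwidth <= m < 0]
    return sum(winners) / len(winners) - sum(losers) / len(losers)

# Fabricated illustrative data.
margins = [-0.04, -0.02, -0.01, 0.01, 0.03, 0.04, 0.20, -0.30]
outcomes = [0.44, 0.46, 0.45, 0.51, 0.52, 0.50, 0.60, 0.35]

effect = rdd_incumbency_estimate(margins, outcomes)
print(round(effect, 3))  # positive -> incumbency advantage, negative -> disadvantage
```

Because bare winners and bare losers are plausibly comparable, the sign of the estimate distinguishes incumbency advantage from disadvantage, which is what allows the design to measure variation in the direction of incumbency bias across systems.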
This chapter defines the most important basic notions and concepts figuring in the analysis of the system of qualifications of states of affairs and their expression in language, as they are used in the book. It defines the semantic categories figuring in the domain of qualifications of states of affairs (often called TAM categories), in some cases in a preliminary fashion, as some of them are elaborated or modified in later chapters. It introduces the notion of a hierarchy present in the system of qualificational dimensions, accounting for their semantic scope properties – an analytical concept central in the book. It moreover discusses the notion of a ‘semantic paradigm,’ as a system of expressive devices for a single qualificational category in which each member has slightly different functional characteristics, and it motivates the correlated function-to-form approach adopted in the study for analyzing qualificational categories and their expressions. The chapter moreover does some groundwork for the theoretical issue of the position of the hierarchical system in cognition, arguing that it is conceptual, not linguistic, in nature.
This chapter returns to the theoretical concerns of the study, and to the principles at the heart of a cognitive-functional approach to modeling the cognitive processes in language use. Central are the basic principles of depth and dynamism, and the three issues emerging from them when comparing cognitive and traditional functionalist approaches in current linguistics: the (non)concern with conceptualization in linguistic analysis, the processual vs. representationalist concept of grammar, and the complex meaning-form relationship. The chapter rounds up and reflects on what the analyses of the attitudinal and other semantic and functional dimensions in the preceding chapters have shown with relevance to these principles and issues. Moreover, it uses these insights to dwell on wider implications, beyond the analysis of the qualificational dimensions, for our understanding of the cognitive systems involved in language use.
Modality – the ways in which language can express grades of reality or truth – is the subject of a vast and long-established body of research. In this book, field-leader Jan Nuyts brings together twenty years of his research to offer a comprehensive, fully integrated view on areas of contentious debate within modality, from a functional and cognitive perspective. The book provides an empirically grounded, conceptual reanalysis of modality and related categories including evidentiality, volition, intention, directivity, subjectivity and mirativity. It argues for the dissolution of the category of modality and for an alternative division of the wider field of semantic notions at stake. The analysis also reflects on how to model the language faculty, and on the issue of language and thought. It is essential reading for researchers interested in the semantics of modality and in the implications of this domain for understanding the cognitive infrastructure for language and thought.
Theory is the essential foundation on which an empirical network study is built. A network theory stipulates a certain, carefully defined network and offers a reason why it relates to other variables. Pinning down what the precise network of theoretical interest is and fleshing out a reason why it matters is what makes up the key preliminary work in empirical networks research design. It can be tempting to rush through this preliminary step, especially when data are readily available. Note that doing so comes with risks. Design blunders are more debilitating in networks research than in other data collection endeavors. Thinking through all aspects of a theoretical setup takes time, but is part of the real work of research design. Taking the time early is an investment in avoiding wasted effort later. This chapter presents a framework to help construct a theory that is maximally useful for guiding empirical research design.