Chapter 3 focuses on how to think when evaluating experiments. This includes a discussion of realism, particularly why mundane realism or resemblance to the “real world” receives far too much attention, as well as an overview of how to design experimental treatments. The chapter then turns to validity issues, offering a new way to think about external validity in assessing experiments. This includes a detailed discussion of sampling and why the onus for justifying a sample should fall more on critics of an experimental sample than on the experimentalist.
This book seeks to narrow two gaps: first, between the widespread use of case studies and their frequently 'loose' methodological moorings; and second, between the scholarly community advancing methodological frontiers in case study research and the users of case studies in development policy and practice. It draws on the contributors' collective experience at this nexus, but the underlying issues are more broadly relevant to case study researchers and practitioners in all fields. How does one prepare a rigorous case study? When can causal inferences reasonably be drawn from a single case? When and how can policy-makers reasonably presume that a demonstrably successful intervention in one context might generate similarly impressive outcomes elsewhere, or if massively 'scaled up'? No matter their different starting points – disciplinary base, epistemological orientation, sectoral specialization, or practical concerns – readers will find issues of significance for their own field, and others across the social sciences. This title is also available Open Access.
How can researchers obtain reliable responses on sensitive issues in dangerous settings? This Element elucidates ways for researchers to use unobtrusive experimental methods to elicit answers to risky, taboo, and threatening questions in dangerous social environments. The methods discussed in this Element help social scientists to encourage respondents to express their true preferences and to reduce bias, while protecting respondents, local survey organizations, and researchers. The Element is grounded in an original study of civilian support for the jihadi insurgency in Dagestan, in the Russian North Caucasus, which assesses theories about wartime attitudes toward militant groups. We argue that sticky identities, security threats, and economic dependence curb the ability of civilians to switch loyalties.
In Chapter 10, I discuss the moral context of research interpretation and reporting. I describe interpretation as the constitution of evidence within an epistemic frame characterized by the totality of (always at least partly moral) commitments underlying analytic choices. These analytic choices include those concerning what is worthy of study, what kinds of methods and forms of evidence are considered acceptable, and what kinds of claims are warrantable. I also emphasize the ways that evidence is not merely gathered or reported, but constituted within a rhetorical and political context. In the latter half of the chapter, I discuss the moral affordances of research reporting, focusing on questions of fairness, honesty, representation, and other considerations involved in report authoring. I focus specifically on questions of: collaboration and credit; style and representation; venue, availability, and audience; submission, editorial review, and revision; and the dissemination and use of research reports.
In Chapter 4, I draw on social science, historical, and philosophical studies of science to describe the everyday activities involved in science. I characterize the core scientific task as the production of accounts which legitimate, organize, and mobilize scientific labor. These accounts are shaped by a complex network of social constraints and processes. Through the main body of the chapter, I describe some of these constraints and processes, including those that are cultural and political (e.g., national funding priorities), professional and disciplinary (e.g., disciplinary norms concerning methods, equipment, writing conventions, etc.), institutional (e.g., managing the requirements of bureaucracies and budgets), local and interpersonal (e.g., lab politics and professional rivalries), and dispositional and personal (e.g., personal talents and capacities for science work). I argue that these various social constraints and processes constitute the moral geography of science and that to navigate them well and responsibly is the substance of good science.
In Chapter 3, I discuss scientific instrumentalism, or the notion that scientific findings are morally neutral and that scientific activities are justified primarily in terms of their pragmatic utility. I argue that an instrumentalist approach to psychology disguises the moral and political agendas of those who deploy psychological research, conflating these with a neutralist account of “what works.” I provide a broad historical sketch of those for whom psychology has worked – primarily, large institutions – and of those for whom psychology has not worked – principally, those in disenfranchised social positions. I detail some of the most egregious examples of harm, exploitation, and injustice in the history of psychology, providing a general analysis of the ways that psychologists have encoded racism, sexism, and other forms of prejudice under seemingly neutral categories like intelligence. Concluding Part I, I outline how scientism, objectivism, and instrumentalism combine to undermine the moral responsiveness of psychology.
I don’t necessarily want to begin this book with an involved critique of scientism or objectivism. The job has been done and more than once.1 Yet here I am, marshalling my wits and courage. I feel the need to pick the same old fights mostly because psychologists (and other scientists) are, from our earliest education, inoculated with a set of immunities against a moral understanding of our work. That moral understanding is, I think, quite straightforward; it is easy to see how and why we have particular responsibilities to particular persons and communities at particular moments in scientific practice. But these particularities of moral responsibility are dismantled and obscured by a scientistic education. So much so that what should be obvious and second nature – namely, that science is a community, and that good science means personal integrity and principled commitment, moral responsibility to, and care for, particular persons, and good citizenship – is, at best, an invisible background assumption and, at worst, a simply alien way of thinking about science.
In Chapter 11, I summarize the book as a whole, arguing that the dilemmas, decisions, relational and institutional commitments, and other moral considerations described here are the essence of good science. I also argue that, because a social and moral account of science does not ignore or hide the human and moral contexts of research, it subjects scientific claims to a more rigorous scrutiny than does an objectivist account and so strengthens the warrant for those claims. Finally, I discuss how the account offered in this book impacts everyday psychological practice, acknowledging the inevitably local and contextual ways that a moral accounting of psychology might be realized within specific research communities. This caveat notwithstanding, I suggest that anyone could begin by asking, in their own communities, the kinds of questions posed here as well as by participating in an epistemic activism aimed at transforming disciplinary structures and practices.
In Chapter 5, I extend the social and moral account of scientific work with a similarly social and moral account of scientific justification. My argument is that scientific justification should consist, not simply in the construction of evidentiary rationales, but in the refinement of the whole moral architecture of science. I insist that the “core competency” in training and oversight for psychological inquiry should be the justification – that is, the making just, right, and true – of research practices. Drawing on the work of Emmanuel Levinas and Helen Longino (among others), I argue that two forms of practice essential to such justification are an open disciplinary politics, or an institutionalized openness to uncertainty, critique, and correction by the widest possible range of qualified contributors, and a committed research praxis, or an approach to research where everyday scientific practices are interrogated and refined to become consistent with explicit values.
In Chapter 8, I discuss some of the most salient moral questions, dilemmas, and duties involved in choosing a research community (and conversation) in which to participate. Conducting research in some area, I argue, is not so much a solitary domain choice as a process of becoming socialized to a particular community and to its values, languages, traditions, institutions, and ways of thinking, writing, and working. In some measure, this also means becoming responsible for those traditions and values. I also discuss the ways that participating in a research community involves building the relationships of trust and good faith upon which all science rests, a relational process requiring honesty, the nurturing of cooperative relationships, and other duties. I also emphasize the political, institutional, and economic forces that structure research communities and the necessity of active epistemic citizenship to transform these in ways that serve the collective values of those communities.
I think it uncontroversial to claim that, like everyone else, the scientist navigates a world of moral choices, human relationships, and political systems that shape and constrain the kinds of work it is possible to do. The scientist must make difficult, inherently ambiguous choices about who and what matters (e.g., is worthy of study), where to make compromises and accept constraints, what counts as evidence, and so on. The scientist must also attend to an often vast network of relationships – with supervisors, administrative staff, collaborators, research participants, journal editors, policy makers, and so on – through which her work is fashioned and disseminated. And that work is bound up in a range of political systems she must navigate, including institutions that hold legal or financial interest, those that provide auxiliary support and oversight, those that administer local resources and responsibilities, and those that arbitrate disciplinary influence and prestige.