
The Qualitative Transparency Deliberations: Insights and Implications

Published online by Cambridge University Press: 06 January 2021


Abstract

In recent years, a variety of efforts have been made in political science to enable, encourage, or require scholars to be more open and explicit about the bases of their empirical claims and, in turn, make those claims more readily evaluable by others. While qualitative scholars have long taken an interest in making their research open, reflexive, and systematic, the recent push for overarching transparency norms and requirements has provoked serious concern within qualitative research communities and raised fundamental questions about the meaning, value, costs, and intellectual relevance of transparency for qualitative inquiry. In this Perspectives Reflection, we crystallize the central findings of a three-year deliberative process—the Qualitative Transparency Deliberations (QTD)—involving hundreds of political scientists in a broad discussion of these issues. Following an overview of the process and the key insights that emerged, we present summaries of the QTD Working Groups’ final reports. Drawing on a series of public, online conversations that unfolded at www.qualtd.net, the reports unpack transparency’s promise, practicalities, risks, and limitations in relation to different qualitative methodologies, forms of evidence, and research contexts. Taken as a whole, these reports—the full versions of which can be found in the Supplementary Materials—offer practical guidance to scholars designing and implementing qualitative research, and to editors, reviewers, and funders seeking to develop criteria of evaluation that are appropriate—as understood by relevant research communities—to the forms of inquiry being assessed. We dedicate this Reflection to the memory of our coauthor and QTD working group leader Kendra Koivu.Footnote 1

Type: Reflection
Copyright: © American Political Science Association 2020


Transparency in Qualitative Research: An Overview of Key Findings and Recommendations

Over the last quarter century, a variety of intellectual and institutional efforts have been made in political science—often in parallel with developments in many of the natural and social sciences—to enable, encourage, or require scholars to be more open and explicit about the bases of their empirical claims and, in turn, to make those claims more readily evaluable by others. Important developments include Gary King’s essay (1995) on replicability as an evaluative standard; the expansion of archiving infrastructure for both quantitative and qualitative data; the adoption of data-management, data-archiving, and replication policies by journals, publishers, and funders; technological innovations that have made it easier to embed annotations and primary-source links in published output; and the Data Access and Research Transparency (DA-RT) initiative (see esp. Lupia and Elman 2014) and the associated Journal Editors’ Transparency Statement (JETS), which, as of March 2019, had been signed by twenty-seven political science journals.Footnote 2

Political scientists who develop and use qualitative methods have long taken an interest in—and put forth a broad range of innovative strategies for—making research open, reflexive, and analytically systematic (e.g., Van Evera 1997; George and Bennett 2005; Wedeen 2010; Brady and Collier 2010; Fujii 2012; Schatz 2013; Mosley 2013). However, the recent push for overarching transparency norms and requirements in the discipline—while relatively uncontroversial among quantitative scholars—has provoked serious concern among qualitative scholars. A 2015 symposium in Qualitative & Multi-Method Research (Büthe and Jacobs 2015a) featured, alongside arguments about the benefits of enhanced research explicitness, a number of essays highlighting the ethical risks and intellectual limits of transparency requirements, and especially of data-sharing rules, for some forms of social inquiry, as well as potential chilling effects of such requirements for certain kinds of qualitative research.Footnote 3

A public letter signed by over 1,100 political scientists in the fall of 2015 called for a delay to the implementation of the JETS to allow time for consultation and deliberation over aspects of the requirements that might have deleterious effects on qualitative research and its publication and might impinge on researchers’ obligations to protect human subjects.Footnote 4 These concerns arose, moreover, against a broader disciplinary backdrop in which qualitative research traditions appeared to many to be losing ground to quantitative methods on a number of fronts, including in the pages of leading journals. Further discussion of transparency’s promise and perils for qualitative inquiry unfolded on the website dialogueondart.org, in the pages of the Comparative Politics Newsletter,Footnote 5 and in a number of journal articles (e.g., Schwartz-Shea and Yanow 2016; Monroe 2018; Tripp 2018).

The Qualitative Transparency Deliberations (QTD)—sponsored by the APSA’s organized section for Qualitative and Multi-Method Research—emerged in this context of accelerated rule-making on, and intensifying debate about, data sharing and other forms of openness in political science research. The QTD was established as a venue within which qualitative scholars could deliberate the role, contribution, costs, and limitations of transparency in qualitative research. The QTD aimed to create discursive space for qualitative research communities in the discipline to work through and articulate understandings of and expectations around research transparency on their own methodological and substantive terms, while illuminating areas of shared and divergent understanding across the discipline. Amidst a debate often focused on large questions of principle, the process was also designed to draw attention to concrete research practices that qualitative scholars can and do employ to generate clear, evaluable, and rigorous research.

Several hundred political scientists took part in the deliberations, discussing questions such as: When and why is it beneficial for scholars to provide a detailed account of the methods by which they gathered and analyzed their evidence, and what are effective ways of providing this information? Under what conditions and how should scholars consider sharing “raw” qualitative data, such as interview transcripts, and what benefits might arise from doing so? What costs and practical constraints may limit scholars’ ability to share their research materials? How should editors and reviewers for the globally dominant Anglo-American journals and presses, when articulating transparency expectations, take into account that political science scholarship is carried out in diverse political contexts and by scholars with highly unequal social and economic resources? What are the implications for transparency of researchers’ ethical obligations toward human participants? Why and when might scholars have ethical imperatives not to share the unprocessed data underlying their claims, or even details of their empirical methods? What about transparency toward those who participate in our research? How well or poorly does the very concept of “transparency,” with associated philosophical presumptions, fit with the epistemological and ontological premises on which different forms of qualitative research rest?

Policy issues, including the question of what kinds of transparency rules (if any) journal editors should adopt for qualitative researchers, constituted a key concern of many who took part in the deliberations.Footnote 6 At the same time, the QTD was not set up as a debate over DA-RT/JETS or any other specific instantiation of transparency norms. Rather, the process was created to give qualitative scholars an opportunity to openly deliberate about the meaning of transparency; the benefits, costs, and risks attending its pursuit; the means of achieving it; and the limits of its usefulness.

The deliberations sought to create space for the emergence of differentiated, multi-dimensional understandings of these issues. Early critiques of the transparency movement within our profession focused in part on the danger of imposing “one-size-fits-all” standards on widely differing forms of research. Some of the concerns were about the imposition of common rules on quantitative and qualitative scholarship. Yet the category of “qualitative research” itself encompasses a vast range of logics of knowledge-production, methodologies, forms of evidence, and research settings. The meaning, value, costs, and operationalization of research transparency—and even its coherence as a concept—are likely to depend heavily on the particular form of qualitative scholarship in question. Further, the umbrella notion of transparency encompasses a highly diverse set of principles and practices, with potentially widely varying implications. For instance, the intellectual logic and the practical and ethical challenges of sharing the “raw” data underlying a study are likely to be very different from those of being explicit about the details of the analytic process or of disclosing potential conflicts of interest.

The QTD was, accordingly, designed to allow various research communitiesFootnote 7 to arrive at different answers to the questions under discussion and to encourage a differentiated examination of transparency’s multiple dimensions. The QTD has thus, in part, been a process of articulating and explaining differences to one another. This has included vigorous yet constructive debate over the utility and coherence of the very notion of research “transparency” and related concepts, such as openness, explicitness, reflexivity, and research integrity.Footnote 8

This essay maps out the structure and outcomes of this wide-ranging, multi-year discussion. On some issues, the outcome has been a mutual understanding of where consensus or compromise cannot be reached—where intellectual pluralism implies sustained disagreement. At the same time, the process also brought to the fore a striking range of agreement about the kinds of information that scholars ought to provide about how they have arrived at their empirical claims. Agreements about common and best research-explicitness practices emerged mostly from within particular research communities. We highlight a number of these differentiated, community-specific understandings in this essay; they are discussed at greater length in the report summaries that follow this overview.

Reading across these reports and their central claims, moreover, we elaborate a core set of more general emergent findings of the QTD process. Among the most important are:

  • there exists no single “meta-standard” of research transparency that can operate coherently across all logics of qualitative inquiry;

  • sharing some source materials is seen as an intellectually valuable practice within many qualitative research traditions; however, uniform and maximalist data-sharing requirements would be highly problematic for ethical, practical, and epistemological reasons;

  • researchers’ ethical obligations to protect human participants and their communities ought to take priority over the sharing of information with research consumers.

In addition, we identify relatively broad consensus among qualitative research communities on the importance of detailed and explicit discussion, in the publication or presentation of research findings, of three general features of an empirical inquiry:

  • the process through which the evidence used in a study was generated;

  • the analytic process through which the scholar arrived at conclusions; and

  • the risks faced by human participants in a study and the steps taken to protect them and their communities.

This article elaborates these and other key insights emerging from the deliberative process. This overview of the process and its findings is followed by executive summaries of the reports issued by the working groups that led the deliberations. The full reports are available as Supplementary Material, itemized ahead of the References section.Footnote 9

We hope these reports will contribute to professional debates and practices in several ways, including by:

  • advancing scholarly understandings of the meaning of transparency in different forms of qualitative research, including its conceptual limits for some research traditions;

  • providing researchers with practical guidance on specific ways of being open or explicit about various elements of the research process, including about their potential benefits, costs, and risks;

  • informing graduate student training in research design and methods; and

  • informing policy- and decision-making by reviewers, editors, and funders seeking to develop and apply standards and criteria of evaluation that are appropriate—as understood by relevant research communities—to the logics of inquiry and forms of the research being assessed.

Moreover, in the interest of informing scholarly practice, most reports identify and discuss specific works of qualitative political science that showcase particular research and research communication strategies.

The remainder of this essay provides an overview of the deliberative process and its main findings. Following an account of the QTD’s origins and procedures, we discuss the meanings of transparency that emerged from the deliberations, unpacking the diverse forms of research explicitness that the QTD working groups conceptualized and examined, including some that have not featured prominently in previous disciplinary discussions. We then sketch the key benefits of transparency identified in the deliberations, including gains for the interpretability and assessment of research, for research processes, and for human participants. Next, we discuss important tradeoffs highlighted in the deliberations, outlining costs and risks that some openness practices or requirements might imply for participants, researchers, and political science scholarship more broadly, not least because the downside risks might be exacerbated by inequalities across scholars and institutions. This is followed by a consideration of a more fundamental critique of the transparency agenda, elaborated by some participants operating in interpretivist research traditions, as incompatible with the logics of knowledge-production on which much qualitative research rests. In the penultimate section, we draw together the implications of the deliberations for research practices, identifying key areas of consensus and disagreement across research communities, and drawing attention to a number of concrete transparency strategies highlighted or proposed in the reports. We close by discussing the implications of the QTD’s findings for the work of editors, reviewers, funding agencies, and professional bodies in the discipline.

The QTD ProcessFootnote 10

At its 2015 business meeting, against the backdrop of broader debates about transparency in the profession, the APSA’s Organized Section for Qualitative and Multi-Method Research (QMMR) unanimously passed a motion to initiate a process of deliberation over transparency in qualitative research. The motion tasked Tim Büthe and Alan Jacobs with drawing up and putting before the section membership a proposal for this deliberative process. In the winter of 2015–2016, the proposal went to an online vote of all QMMR section members, passing with 98% in favor on a turnout of 303 of the section’s 645 members.

QMMR section president Peter Hall then proceeded to appoint a ten-person Steering Committee that would include scholars engaging in a wide range of forms of qualitative research. The Steering Committee was composed of Andrew Bennett, Erik Bleich, Mary Hawkesworth, Kimberley S. Johnson, Kimberly Morgan, Sarah E. Parkinson, Edward Schatz, and Deborah Yashar, with Büthe and Jacobs serving as co-chairs.

The QTD Steering Committee organized a first, agenda-setting phase of deliberations, which unfolded online in the spring of 2016. During Stage I, scholars from across the profession were invited—via a wide range of online channels, including APSA section and other e-mail lists—to participate in an open-ended online consultation on the questions and concerns on which the deliberations should focus. Over 170 comments were received during this stage.Footnote 11 Among the vast number of issues and questions raised were the variety of possible meanings of “transparency” and forms it might take; the value of different transparency practices; the implications of data sharing for the safety of research participants; concerns about the fit between transparency requirements and the logic of particular qualitative methodologies or their underlying epistemological and ontological premises; and the scale and equity of the burdens that data-access rules might impose on qualitative scholars, and especially junior scholars.

Based on this initial input, the Steering Committee appointed a set of thirteen working groups to lead consultations and deliberations on different aspects of the topic. One common focus of many Stage I comments was the relationship between transparency and particular forms and settings of inquiry, suggesting that transparency’s practicalities, benefits, costs, and limitations are highly conditional on the kind of qualitative research in question. The Steering Committee thus organized the working-group mandates in a way that would allow for a differentiated consideration of transparency’s merits and mechanics for different types of scholarship.

The working groups were organized into four broad clusters, as are the reports and summaries. Cluster I consisted of three working groups focused on a set of fundamental issues that cut across particular research traditions:

  • the relationship between scholars’ understandings of transparency and the epistemological or ontological presumptions underlying their work (I.1);

  • the interface between openness and researchers’ ethical obligations to protect human subjects (I.2);

  • the institutional form that any pursuit of research explicitness might take—ranging from strictly voluntary individual practices to obligatory prescriptions (rules) with centralized enforcement—and how these interact with power and resource differentials in the profession (I.3).

Cluster II was structured to allow deliberation on how the meaning, value, and challenges of transparency might vary across forms of qualitative evidence. Group II.1 was tasked with considering text-based sources while group II.2 focused on evidence derived from researchers’ direct interactions with human research participants.

Cluster III unpacked the problem by analytic approach or methodology, with groups dedicated to considering process tracing and comparative methods (III.1), interpretive methodologies (III.2), ethnography (III.3), set-theoretic approaches, especially Qualitative Comparative Analysis (QCA) (III.4), and manual content analysis (III.5).

Finally, in Cluster IV, working groups considered the complexities of pursuing transparency in particular research contexts—authoritarian or repressive political settings (IV.1) or settings of political violence (IV.2)—or for research with vulnerable or marginalized populations (IV.3).

The Steering Committee recruited for each working group three or four scholars who regularly engage in the kind of research, or have special expertise in the issue area, on which the group was to focus. In staffing the working groups, the Steering Committee also aimed to capture a range of approaches encompassed within each group’s mandate, broad regional expertise, and diversity in demographic backgrounds, career stages, and institutional affiliations (e.g., public/private, research-/teaching-oriented).

With the support of the National Science Foundation, the Steering Committee and all working groups met prior to the 2016 Annual Meeting of the APSA to discuss the overall mandate and objectives toward which the deliberations should be oriented. Drawing on this discussion, the Steering Committee suggested a set of common, core questions to guide the consultations, deliberations, and reports of the working groups in Clusters II, III, and IV—those focused on particular forms or settings of research. Specifically, each group was asked to consider, for the kind of evidence, analytic methodology, or research setting that they were tasked with examining:

  • the meaning of transparency as a concept (including the potential lack of a coherent meaning);

  • transparency’s intellectual, social, or ethical benefits;

  • the costs and risks of, or obstacles to, pursuing transparency; and

  • concrete practices through which scholars might either realize greater transparency or, without the use of transparency practices, generate research that is insightful, credible, and evaluable.

While the QTD’s original declared objective was to assess transparency’s promise and limits for different forms of qualitative research, early discussions among participants revealed concerns about putting this concept at the center of the process. Some participants viewed “transparency” as too closely tied to DA-RT’s specific operationalization of research openness, which presumed a particular mode of research while excluding important forms of information-sharing in which qualitative scholars might usefully engage. Other colleagues raised a more fundamental objection, arguing that transparency was inextricably linked to a particular, empiricist view of knowledge-production; they voiced a concern that a focus on “transparency” would thus privilege a narrow set of philosophical premises to the exclusion of others. Still others held that qualitative scholars should maintain a focus on “transparency” but adapt or expand its meaning in ways appropriate to qualitative research logics—thus laying claim to the intellectual “high ground” that the concept occupies, rather than ceding that ground to other research traditions.

The controversy surrounding the QTD’s focal concept presented the Steering Committee with a dilemma in setting the terms of discussion. Should we be talking about transparency? And if not, what should we be talking about? Among the possible options were (1) to maintain a clear focus on “transparency” but seek to expand its meaning to encompass the varied logics and norms of qualitative inquiry; (2) to replace the concept with an alternative such as “openness,” “explicitness,” or “research integrity”; or (3) to broaden the scope of deliberation to encompass both an expansive notion of transparency and related concepts. The Steering Committee opted for the third approach, asking working groups to consider transparency as one of a number of possible means of achieving broader end goals, including richer communication about knowledge-production, research integrity, and professional ethics. In framing the discussion in these terms, the committee also sought to make space for participants to critique or even reject transparency as an intellectual value and to elucidate alternative mechanisms for generating evaluable, interpretable claims and for advancing the ethical pursuit and cumulation of knowledge. In the end, the majority of groups chose to frame their findings in terms of “transparency,” regardless of the stance that they took on the issues under consideration. The “Ethnography and Participant Observation” and the “Interpretive Methods” working groups, as well as one of the “Epistemological and Ontological Priors” subgroups, on the other hand, chose to part ways with the terminology of transparency as a poor philosophical fit for the logics of inquiry that they were examining and to employ alternative concepts. Moreover, the working group on “Research Ethics and Human Subjects,” while acknowledging the value of “transparency” in some settings, advances a broad and distinct approach of “reflexive openness” that emphasizes sustained reflection on ethical research practices. At the same time, these reports also make clear that research communities that reject transparency’s epistemological underpinnings do not uniformly reject all of the concrete practices with which it is associated. It appears that research communities sometimes engage in similar practices of scholarly communication for different intellectual reasons.

With the broad terms of discussion established, the thirteen working groups engaged in wide-ranging consultations (Stage II) from September 2016 into early 2017, gathering the views of interested research communities on the questions at hand. These consultations unfolded mostly online and on the record, with each group facilitating its own discussion forum on qualtd.net. Over 500 additional comments were received across the thirteen working groups. Stage I and Stage II posts have been viewed a total of over 100,000 times,Footnote 12 suggesting interest in these discussions that extended well beyond those who actively participated. In an age of rampant online incivility, it is worth noting that the exchanges on the QTD forums, including those involving anonymous participants, were almost entirely respectful in tone and substantive in nature.Footnote 13 The full text of the Stage I and Stage II online deliberations is archived on the Harvard Dataverse.

At the close of the Stage II consultations, the working groups began drafting their reports, drawing on consultative input, broader disciplinary debates over transparency, and their own discussions. These reports were posted on the QTD website in September 2017, with public comment invited through the fall. In early 2018, groups embarked on revisions to their reports in response to online comments and feedback from the Steering Committee, with reports and executive summaries finalized in the summer and fall of 2018.

Forms of Transparency

We turn now to the substantive insights that emerged from the deliberations. The QTD’s terms of discussion left the meaning of research transparency open, allowing research communities to consider the merits of any form of information-sharing, in the process of research or publication, that they saw as worthy of examination. Taken together, the deliberations and the reports suggest a wide range of ways in which researchers (qualitative or otherwise) might choose to be transparent. In particular, scholars may be explicit about:

  • research goals, including a project’s intellectual, political or social objectives;

  • processes of generating evidence, including details of the sites of data-collection; the location of sources; the criteria according to which sites, sources, or cases were selected for analysis; how access to sources or human participants was obtained; the nature of any interactions with human participants (e.g., the questions asked); coding procedures or other means used to turn raw observations into analyzable data; and any mid-course changes in evidence-gathering plans and procedures;

  • analytic processes used to draw conclusions from the evidence by, among other things, identifying any assumptions or features of context on which the analysis rests; providing an account of the sequence in which evidence was analyzed and hypotheses were developed; discussing any iteration between the two; and reporting on hypotheses that failed to be supported by the evidence;Footnote 14

  • researcher positionality by explicitly reflecting on how the researcher’s position within power structures, especially vis-à-vis other research participants, might have influenced the kinds of evidence they have gathered and how they have interpreted it;Footnote 15

  • researcher subjectivity by explicitly reflecting on how the researcher’s life experiences and individual characteristics might have influenced the kinds of evidence they have gathered and how they have interpreted it;

  • risks to human participants/communities by providing, in presentations and publications, a discussion of the harms that their research or its dissemination might pose to those who participated in the study or their communities, and of how those risks were managed in the course of the project; and

  • conflicts of interest that the researcher(s) might have, or appear to have, including any vested interest in project outcomes, the sources of project funding, and relevant personal affiliations.

In addition to providing information about these aspects of the research process, scholars might also choose to engage in:

  • data sharing: researchers might make available to others elements of the original or “raw” source material that they have analyzed, such as the contents of textual sources or interview transcripts. Data sharing might take maximalist forms, such as the sharing of a complete interview transcript (possibly annotated with ethnographic observations), or more limited forms, such as the sharing of extended excerpts from a source text. Data might be shared within a book or article itself, whether in the body of the text or via a digital annotation, and/or posted on a digital platform or repository.

In addition, as a number of reports note, scholars working with human subjects must—separately—make choices about transparency toward research participants regarding the foregoing aspects of the research and dissemination process, including about the degree and nature of data sharing that will take place.

In short, the deliberations suggested a substantially more expansive understanding of “research transparency” than implied by recent disciplinary discussions. The DA-RT initiative, for instance, focused almost exclusively on data sharing, transparency about evidence-generation, and transparency of analytic process—all within the context of openness toward the consumers of a research product. The QTD reports point to a far wider array of features of the research process about which scholars might usefully share information with both research audiences and research participants. Importantly, the reports also point to and discuss a large number of specific examples of published qualitative political science scholarship that put these various forms of research explicitness into practice.

In the next three sections, we provide a synthesis of the deliberations. We begin by identifying the key benefits that qualitative scholars see as arising from different forms of research transparency. Next, we consider potential drawbacks of the pursuit of transparency—adverse consequences for the production of knowledge and risks to research participants—upon which the deliberations shed light. The essay turns then to a more fundamental critique of the concept of “research transparency” as incoherent and incompatible with the ontologies, logics of inquiry, and evaluative standards underpinning some forms of qualitative scholarship.

Potential Benefits of Transparency

Notwithstanding concerns about the drive for greater transparency, the QTD process revealed that many qualitative research communities—including those with serious concerns—believe that many forms of research explicitness promise intellectual and social benefits.

Greater Understanding

A number of working groups point out that providing clear and detailed information about research goals, the process of generating evidence, and the analytic process can help readers make sense of published research and its conclusions.Footnote 16 Knowing why a given piece of research was undertaken and how the findings emerged aids in understanding key claims and their implications. As the group examining research on vulnerable and marginalized populations pointed out, identifying risks to human participants and explaining how the researcher chose to mitigate those risks can help readers understand why the researcher got the results they did and how results might have differed if a different approach had been employed.Footnote 17 Moreover, scholars working in interpretivist traditions argue that stating and explaining the epistemological premises and intellectual goals underlying their methods and findings can help comprehension, especially among readers based in other traditions.Footnote 18

Gains to Research Assessment

Qualitative scholars quite broadly agree that various forms of transparency can improve research evaluation. These gains can operate on a number of levels.

In quantitative research, the sharing of data and code is often meant to enable replicability as an evaluative standard. For the research communities at the core of the QTD, by contrast, transparency contributes to research assessment primarily in ways unrelated to replication. By far the most commonly expressed view was that the provision of more information about the research process helps research audiences better identify potential biases or other threats to the validity of findings. Readers can more easily evaluate the quality of the evidence and assess how research context and researcher choices might have shaped or distorted conclusions when scholars provide accounts of:

  • how or by whom textual sources were produced;Footnote 19

  • why particular sources were chosen for analysis;Footnote 20

  • how access was gained to field sites;Footnote 21

  • how sites were chosen and interlocutors recruited;Footnote 22

  • how views were solicited from human participants;Footnote 23

  • what information was shared with them;Footnote 24

  • what efforts were made to protect research participants in high-risk settings;Footnote 25

  • how the researcher’s social position might have shaped interactions in the field;Footnote 26

  • how funding sources might have affected participation in the study;Footnote 27 and

  • how inferences were drawn from observations.Footnote 28

A clear account of evidence-gathering and analytic processes can also help readers evaluate the risks of “cherry-picking,” even when the raw data themselves cannot be fully shared.Footnote 29 The deliberations on transparency in political violence research, in particular, generated intriguing ideas about how scholars can render their analytic methods more transparent and their empirical claims more susceptible to external scrutiny without the full sharing of data.Footnote 30 Finally, as the group examining process tracing points out, formalizing certain aspects of qualitative analysis—such as background beliefs and the probative value of evidence—can make it even clearer how conclusions have been derived from an array of observations and make it easier for readers to evaluate empirical findings.Footnote 31
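For readers unfamiliar with this kind of formalization, the Bayesian logic commonly invoked in formal process tracing can be stated compactly. The notation below is an illustrative sketch of that standard logic, not drawn from the working group’s report: the researcher states prior odds on a hypothesis $H$ reflecting background beliefs, assigns a piece of evidence $E$ a likelihood ratio expressing its probative value, and multiplies the two to obtain posterior odds.

$$
\frac{P(H \mid E)}{P(\lnot H \mid E)} \;=\; \frac{P(H)}{P(\lnot H)} \times \frac{P(E \mid H)}{P(E \mid \lnot H)}
$$

Writing each term down makes explicit how much a given observation shifts confidence in a claim, which is precisely what opens the inference to external scrutiny even when the underlying evidence cannot be shared in full.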

Moreover, some QTD groups pointed to the contribution that data sharing can make to effective evaluation by enabling alternative interpretations: Access to the underlying data will allow readers to compare the authors’ interpretations to their own.Footnote 32

Further, and perhaps most fundamentally, clarity about research goals (e.g., are we trying to identify causal relations among variables or interpreting social practices?) and underlying epistemological commitments can help ensure that readers apply standards of assessment that are appropriate to the logic of inquiry being employed.Footnote 33

Thus, for many qualitative scholars, transparency aids evaluation in ways that do not turn on the notion of replicability. At the same time, some qualitative researchers view replication as an important tool of research assessment and see transparency as facilitating its operation. Replication is often understood in the relatively narrow senses of verification (examining whether we can generate the same finding by applying the same analytic steps to the same data) or reanalysis (examining whether results change when we apply different analytic procedures to the same data; see Büthe and Jacobs 2015b, 57f.; see also Clemens 2017). Both for algorithmic qualitative approaches (such as QCA) and for methods that involve the coding of textual or audio-visual information (such as manual content analysis), verification and reanalysis are often viewed as important forms of evaluation, and scholars using such methods understand data sharing and transparency of the analytic process as critical to enabling these forms of replication.Footnote 34 Even in a non-algorithmic context, those seeking to evaluate claims grounded in textual sources may find it easier to assess those claims if they can read and analyze the original sources themselves, an evaluative process not unlike verification or reanalysis.Footnote 35

Moreover, transparency can contribute to replication in a broader sense. The working group on research with vulnerable and marginalized populations, for instance, argues that transparency about processes of generating evidence and analytic process can help scholars assess the reproducibility of a finding—using the same data-gathering and analytic procedures to study a different sample from the same population. Or it may allow them to extend the finding by testing it via the same methods with respect to a different population. Importantly, as this group points out, replication in these broader senses can be undertaken without access to the original data; but it does require information about how the evidence was collected and the analysis undertaken in the original study.

Benefits for the Research Process

The kinds of things that a researcher needs to do to make information available to readers and research participants might also improve the research process itself. As one group points out, keeping track of data-gathering procedures, organizing one’s evidence, and writing down one’s analytic steps in a manner that would make them clear to readers help researchers in their use and interpretation of sources and facilitate writing.Footnote 36

Public Goods

A number of research communities see benefits that extend beyond the particulars of a given study, including empirical and methodological gains for future researchers. Working groups examining textual forms of evidence, evidence drawn from research with human subjects, and content analysis point out that data sharing or making sources easily findable provides an evidentiary foundation on which future researchers can build, avoiding unnecessary duplication of effort and aiding the cumulation of knowledge.Footnote 37 Colleagues likewise point to the ways in which clear accounts of data-collection, coding procedures, and the logic of a methodology can serve as a resource for scholars who might consider employing such approaches in their own work.Footnote 38

Colleagues who undertake fieldwork in high-risk contexts point out spillover benefits of transparency toward human subjects: openness and honesty with research participants may help to build trust, enhancing the quality of data when future researchers return to these field sites.

Benefits to Human Participants

While some forms of transparency primarily benefit research producers and consumers, transparency toward human participants benefits participants themselves. Colleagues understand disclosure of the purposes and potential risks of participation in a research project as a fundamental ethical obligation to potential participants, underwriting their ability to make informed choices about participation.Footnote 39 Some colleagues working in settings of political violence see disclosure of funding sources as equally critical to informed consent.Footnote 40 And scholars conducting research with vulnerable or marginalized populations point out that sharing information can help to counteract the power imbalance that often exists between researchers and participants.Footnote 41

Limits to Transparency’s Benefits

The deliberations also brought to the surface a sense of the bounds on transparency’s benefits. In particular, for scholars in some research traditions, there are limited gains to making “raw” empirical materials—such as interview transcripts or field notes—accessible to readers. One key reason is context-dependence: transcripts and field notes would be difficult for readers to decipher without a deep understanding of the empirical setting or the countless observations and impressions that inform researchers’ interpretations but are never recorded.Footnote 42 A second reason is that not all research materials constitute raw “data” extracted from the world: field notes, for instance, are often more a record of the researcher’s evolving understanding of the subject.Footnote 43 Releasing such notes would do little to facilitate independent assessment or replication of the findings.

More fundamentally, the concept of “transparency” is seen as having little epistemological purchase for scholars working within non-positivist traditions. We discuss these deeper, philosophical objections later in the essay.

Potential Risks and Costs

Most QTD research communities understand transparency as involving tradeoffs among values. While seeing numerous benefits to research explicitness, most qualitative scholars also view certain kinds of openness in certain contexts as posing risks to those who participate in the research process and as involving costs for scholars and the field as a whole. These risks arise from two forms of transparency in particular: data sharing and transparency about processes of generating evidence.

Risks to Human Participants

As a large number of QTD contributors observed, sharing the data underlying qualitative research can pose serious risks to human participants in social research. In sharing raw data—such as full interview transcripts or field notes—researchers might inadvertently reveal the identity of human participants, violating their privacy and promises of anonymity or confidentiality and, possibly, data-protection commitments made to Institutional Review Boards (IRBs). In some circumstances, revealing participants’ identities may expose them to a range of potential harms—from shame or harassment to the loss of livelihood, imprisonment, torture, or even death. Such risks will tend to be especially pronounced in particular kinds of research contexts, such as violent or post-conflict regions or repressive political settings, and for populations that are politically, socially, or economically vulnerable or marginalized.Footnote 44 Yet, even participants who are not particularly “at risk” and who are living in stable, democratic settings may want their privacy protected and suffer stigmatization or other forms of social sanction if their verbatim statements and identities are made public.

One commonly proposed solution to this problem is anonymizing or otherwise scrubbing notes and transcripts of identifiers. Several of the reports, however, point out the limits of anonymization and the difficulty of determining which details might later allow “deductive disclosure.” Journalists, for instance, managed to use details of Alice Goffman’s narrative in On the Run to identify individuals whose identities Goffman thought she had protected.Footnote 45 In communities under close government surveillance, phrases used, events referenced, or even the date and time of an interview may be sufficient to reveal interlocutors’ identities.Footnote 46 In some situations, even information about how evidence was gathered—say, a detailed account of sampling and data-collection procedures—might provide sufficient information to identify individuals or communities that participated in a research project.Footnote 47

Informed consent is also frequently seen as a sufficient basis for sharing data derived from research with human participants. If participants have been informed about project goals, methods, and foreseeable risks of taking part; have not been subjected to any undue pressure; and have explicitly agreed that transcripts or field notes may be shared, one might reason, then there is no ethical quandary insofar as participants have made a free choice and have accepted any risks that might flow from this decision. Yet, several of the working-group discussions identified reasons why, and circumstances under which, informed consent may be insufficient as an ethical warrant for sharing verbatim transcripts or other forms of “raw” data drawn from interactions with human participants.

For one thing, the risks of sharing may be difficult for participants to foresee at the time that consent is granted. What may seem like a low-risk disclosure today might become high-risk in the future, as political and social conditions change.Footnote 48 Complicating matters further, data sharing can have implications not just for direct participants in the research process but also for other members of their community, who will typically never have the opportunity to grant or withhold consent.Footnote 49 In violent and post-conflict settings, moreover, full transcripts posted online might aggravate tensions by revealing the unspoken beliefs and values of some community members.Footnote 50 The meaningfulness of consent may also be undermined by resource and power differentials. Participants living in extreme poverty, for instance, might acquiesce in researcher requests in the hope of eventual material rewards, even when none are offered.Footnote 51 Some colleagues argued, further, that participant consent can never substitute for the researcher’s own risk-assessment; if the researcher is aware of risks that may have been unknown to participants, then sharing would be unethical, even if participants agreed to sharing.Footnote 52 Nor can IRB approval stand in for ethical judgment, particularly given that IRB rules typically cover only research subjects and not other individuals, such as local interpreters and field assistants, whose safety may be compromised if their identities were revealed.Footnote 53

Threats to Researcher Safety

Scholars often expose themselves to risk when undertaking intensive fieldwork. Some QTD groups called attention to the possibility that extensive data sharing might heighten risks to researchers, especially those operating in violent or repressive settings, by revealing details of field sites and about the communities or individuals with whom they interacted.Footnote 54

Consequences for Data Quality

Contributors to the QTD pointed to multiple ways in which routine data sharing—especially if uniformly required by publication outlets—might undermine the quality of the data that researchers are able to collect. Requiring subjects to consent to the public release of interview transcripts or field notes might introduce biases. Participants willing to allow the researcher to share their verbatim statements and accounts of their behavior may be systematically different from those who are unwilling, in ways closely related to the questions of interest.Footnote 55 Further, those who do take part are less likely to provide candid responses if they know that full transcripts will be made publicly available.Footnote 56 Researchers who post records of previous interactions at a field site may find future access barred.Footnote 57

Consequences for Topics Studied

From the perspective of some research communities, the adoption of comprehensive qualitative data-sharing requirements by leading political science outlets would threaten the discipline’s ability to address many important topics. It might discourage researchers from, for instance, undertaking research in settings of political violence or with vulnerable and marginalized populations, asking sensitive questions, or exploring research frontiers where the data a scholar collects cannot easily be made legible. These disincentives would likely hit junior scholars—whose career prospects hinge on early, high-status publications—especially hard. Scholars working at resource-poor institutions, in developing countries, or under illiberal political regimes would most acutely confront the disincentivizing and constraining effects of transparency norms, especially if institutionalized as strict requirements.Footnote 58 Alternatively, researchers might simply choose to publish their research in non-political-science outlets, effectively driving the qualitative study of sensitive topics from disciplinary journals.Footnote 59

Costs to Researchers

Colleagues further noted that the time required for the preparation of qualitative data for depositing could be considerable. Particularly labor-intensive aspects of the process may include the digitization of source materials, translation, and the scrubbing of transcripts and notes of potentially identifying information.Footnote 60 The deliberations also elicited concerns about the potentially inequitable distribution of these burdens. The costs of rendering data in shareable form may on average be higher for qualitative than for quantitative forms of evidence;Footnote 61 will be more difficult for junior scholars and those at less well-resourced institutions to bear;Footnote 62 and will be higher for scholars working on more sensitive topics and in higher-risk locations than for others.Footnote 63 At the same time, the working group on textual sources points out that data sharing is not a binary, “all or nothing” choice. Scholars might be able to mitigate many of the associated costs, for instance, by providing access to a select set of documents or transcript passages that are especially informative about an empirical claim.Footnote 64

Other Costs and Limitations

QTD working groups identified a number of other tradeoffs or constraints, including the loss of exclusive use of the data by researchers who may have invested heavily in its generation,Footnote 65 and copyright and other legal restrictions on dissemination of documents.Footnote 66 Colleagues also pointed to ways in which, beyond a certain point, greater transparency may actually undermine, rather than enhance, understanding. Excessively long and complex transparency appendices, for instance, may obscure the most important features of the research process.Footnote 67 Similarly, methods featuring extremely high levels of analytic explicitness—such as formal, Bayesian process tracing—may generate less readable and comprehensible text than do more informal, narrative approaches.Footnote 68

To summarize the foregoing discussion: For many qualitative scholars, the pursuit of transparency involves a set of potential tradeoffs between, on the one hand, the intellectual and social value of different forms of research explicitness and, on the other, the risks that these practices might entail for participants and the costs that they may impose on researchers and on the quality of the research process.

Philosophical Objections

Other QTD participants, however, fundamentally question the usefulness and desirability of research transparency. From the perspective of some qualitative research communities, “research transparency” is an intellectually incoherent notion grounded in a narrow and questionable set of presumptions about how knowledge is produced. The transparency agenda also threatens to sideline scholars who do not view data as “extractable.” For these scholars, evidence is not like raw material, inertly available for removal and unmediated by a broader social environment. “Reality” does not exist independently of the observer and the socio-political worlds within which she operates.

A detailed discussion of this critique can be found in the two reports on epistemological and ontological priors (I.1a and especially I.1b), the report on interpretive methods (III.2), and the ethnography report (III.3). We highlight key issues here.

Transparency’s Philosophically Contingent Meaning

Prominent proponents of transparency in political science have referred to research transparency as a universal “meta-standard”Footnote 69 that has different particular implications for different scholarly approaches. From this perspective, all logics of social inquiry share the meta-standard of research transparency; achieving it may merely require scholars to take different specific steps depending on the particular methods they employ. In contrast to this view, numerous QTD participants and the reports of working groups I.1a and I.1b point out that the concept of research transparency is inextricably bound up with a particular understanding of knowledge-production—an understanding that may fit well with some logics of social inquiry but is incompatible with others.

Data-Analysis Dichotomy

Central to this incompatibility are differences regarding the relationship between empirical information and analysis. The concept of research transparency, as articulated by its advocates in the discipline, is grounded in a model of empirical social inquiry in which the researcher collects evidence, or “data,” and then subjects that evidence to some set of analytic procedures.Footnote 70 This account, however, is not coherent from the perspective of many non-positivist research traditions. A key problem is the implied separability of evidence and analysis. For interpretivists, all observation is theory-laden. Theoretical presuppositions mediate perceptions, organize observations, and demarcate which stimuli qualify as evidence. Data, in some modes of interpretive analysis, are also fundamentally relational, encompassing both what was observed and the researcher’s own reactions to interlocutors and field sites.Footnote 71 For most interpretivist scholars, therefore, evidence is never “raw”; and analysis and interpretation are not performed on evidence but are constitutive of it.

One place where this problem takes concrete form is in sharing ethnographic field notes. As the ethnography working group puts it, field notes are not an unfiltered documentation of events but “pieces of a long process of sorting out what the ethnographer thinks her field interlocutors understand to be happening and how she interprets their understandings.”Footnote 72 Ethnographic researchers “encounter, absorb, and process” much more information than field notes could ever capture, including deep knowledge of context. Field notes cannot be treated as a comprehensive transcript of the evidence since they necessarily omit a great deal of the observations and information that shape the researcher’s interpretation. As the ethnography working group points out, sharing ethnographic field notes might be informative—perhaps about the biases that shaped the researcher’s observations and interpretations. But viewing this action as transparency or data sharing would misconstrue the process of inquiry through which those records were generated.

Misleading Ocular Metaphor

Relatedly, for many interpretivists, the concept of “transparency” is problematic in that it promises a form of knowledge that is fundamentally out of reach. The term rests on an ocular metaphor, implying the possibility of seeing through to gain access to things in themselves or things as they really are.Footnote 73 From key non-positivist epistemological perspectives, such as presupposition theory, the clarity of vision implied by the metaphor is inherently and inevitably illusory. While methods can be explicated and assumptions outlined, we never have full, conscious access to the deep theoretical constructs that structure our perceptions and understandings.Footnote 74 Importantly, this is not a disagreement about the value of effective research communication. Interpretive ethnographers, for instance, routinely provide detailed explanations of how sources and field sites were selected and thick descriptions of their engagements with interlocutors. Central to much interpretive analysis, moreover, are forms of information-sharing—such as explicit reflection on the researcher’s subjectivity or positionality—that, arguably, involve more radical candor than envisioned by mainstream “open science.”Footnote 75 But to equate the explication of a research process with “transparency”—with an unveiling of the scaffolding that undergirds conclusions—is to misconstrue the model of knowledge production on which many interpretive scholars operate.

Value of Transparency for Research Assessment

The deliberations also exposed a related divergence regarding the value of transparency for the assessment of scholarly work, particularly concerning the relevance of replicability to research evaluation. Replicability makes sense as an evaluative standard from a hypothetico-deductive perspective in which social inquiry involves the use of evidence to falsify claims about observer-independent phenomena in the world. In this context, sharing data and analytic procedures aids assessment by facilitating some forms of verification and reanalysis. By contrast, enabling others to retrace the researcher’s steps as part of an assessment of her conclusions makes little sense from a non-positivist perspective in which all scientific observation and interpretation are understood as mediated by the observer’s point of view—by her theoretical presuppositions, her values, her position within societal power structures.Footnote 76

From the latter perspective, the evaluation of scholarly work and its findings does not turn on whether we can generate the same result via the same methods using the same evidence. Nor, for that matter, does assessment involve gauging whether research procedures might have biased results away from the “right” answer. Assessment in interpretivist and other non-positivist scholarship operates on a different set of logics. The QTD working group reports on non-positivist philosophies of knowledge and interpretive methods detail a wide range of alternative ways in which a theoretical explanation or interpretation may be assessed, depending on the methodology being employed and the logic of evidence and argument within which it operates.Footnote 77 Interpretivists seeking to evaluate an evidence-based claim might “interrogate existing categories, question how boundaries have been drawn between one phenomenon and another, challenge the ‘operationalization’ of terms, probe omissions and distortions, examine metaphors and analogies that structure understanding, develop new concepts, introduce new modes of argument, and appeal to different registers of experience”Footnote 78—none of which involves asking how close the claim comes to an observer-independent truth. In the view of interpretivist participants in the QTD, the logics of and prerequisites for these diverse forms of scrutiny bear little relation to the notion of research transparency.

The Politics of Knowledge

QTD participants working in non-positivist research traditions, moreover, expressed grave concern about the longer-term political implications of the transparency agenda, especially insofar as it involves the articulation of new norms or even requirements by professional associations, editors, or funders. To the extent that transparency’s conceptual underpinnings are consonant with some knowledge-production frameworks while being incompatible with others, its elevation as a broad standard, from this point of view, threatens to privilege some modes of analysis and marginalize others. The adverse consequences include “circumscribing the subject matter appropriate to ‘science,’ narrowing the range of analytic practices accredited as empirical inquiry, establishing problematic norms for assessing political inquiry, identifying basic principles of practice for political scientists, and validating one ethos for all scholars.”Footnote 79 And allowing non-positivist approaches to simply register as an exception to broad transparency norms would, the interpretivist group argues, serve only to mistakenly mark these methodologies as intrinsically incapable of meeting disciplinary standards of research integrity.Footnote 80

Implications for Research Practice

Each working group decided on its own whether to advance specific recommendations for research practice, depending on the degree of consensus within the relevant research communities, as well as the group’s sense of the desirability of establishing transparency-related scholarly norms for its particular research community. Most reports stop short of articulating firm and specific rules. This choice emerged partly from the fact that, notwithstanding the QTD’s differentiated structure, numerous groups were grappling with quite varied forms of research activity. For instance, scholars conducting research on authoritarian or repressive political regimes (the remit of working group IV.1) might employ a broad range of methodological approaches, which might warrant diverse openness practices.

Across the reports, two principal exceptions stand out against a general reluctance to promulgate rules. The working group on research ethicsFootnote 81 distilled from its deliberations what it sees as a set of consensus principles for judgment- and decision-making at the interface between research transparency and human-subjects protection. Chief among these principles is the ethical primacy of researcher obligations to protect human participants, even when such protection must come at the cost of reduced transparency toward research audiences. The group also proposes “reflexive openness” as a generalized approach, calling on scholars continually to reflect on the ethical implications of their research activities; to engage and share information with human participants about aspects of the research that could affect them;Footnote 82 and to provide reviewers, editors, and readers with a reasoned account of their ethical practices. The reflexive openness standard calls on editors, reviewers, and funders to evaluate researchers’ decisions to share or withhold information and data based on these accounts, grounded in the nature and context of inquiry, while granting a high degree of deference to researchers’ ethical judgments about whether and what to share.Footnote 83

The other document articulating a clear set of transparency criteria, with broad support across the community of practitioners, is the QCA report.Footnote 84 The report of this group, tasked with examining a single, well-defined method that operates via a relatively standardized procedure, itemizes specific aspects of the analytic process that ought always to be disclosed in QCA research—such as the method of calibration employed, cases’ membership scores, and the decision rules used in truth table analysis.Footnote 85

Even where they do not propose new transparency standards, a number of reports—such as those on text-based sources, comparative methods and process tracing, and content analysis—outline relatively clear expectations about the kinds of information that scholars in a given research tradition should generally seek to provide.Footnote 86 Further, most reports identify a wide range of practices that particular qualitative research communities consider to be valuable and achievable at reasonable cost. We itemize some of these practices later in this section.

Reading across individual reports, moreover, reveals a number of key patterns. These include considerable consensus on the value of several forms of explicitness in qualitative research as well as two principal areas of disagreement.

To begin with the areas of disagreement, as implied by the earlier discussion of epistemological and ontological perspectives, there appears to be a fundamental divide between qualitative researchers who see value in at least some logics and practices of transparency, on the one hand, and qualitative researchers who reject the very concept of transparency as incompatible with their understanding of knowledge production, on the other hand.Footnote 87 This fault line seems to map to some degree onto the difference between broadly positivistic (or hypothetico-deductive) and interpretive modes of analysis—though the alignment is far from perfect. The ethnography working group, for instance, engaged in a wide-ranging exploration of the meaning of “openness”—a concept, arguably, not too distant from an expansive notion of transparency—in ethnographic research.Footnote 88 Moreover, several working groups with epistemologically diverse memberships registered support for multiple forms of transparency.Footnote 89 We would nonetheless identify as a key finding of the QTD exercise that there exists no meta-standard of research transparency that can operate across all forms of evidence-based qualitative inquiry.

The other main area of disagreement, even among research communities that embrace the overall value of research transparency, is the advisability of data sharing. Importantly, there was universal agreement that the sharing of qualitative data should not be uniformly required, given the considerable costs and risks of data sharing for some forms of research.Footnote 90 Nonetheless, qualitative research communities vary widely in the degree to which they view data sharing as the presumptively appropriate practice. At one end of the spectrum, working groups III.4 and III.5 recommend that scholars using QCA and content analysis, respectively, make the qualitative source material used for such analyses accessible wherever ethical considerations, confidentiality agreements, and legal and copyright restrictions do not prohibit doing so. The groups on text-based sources and on comparative methods and process-tracing similarly argue in favor of sharing raw data to the extent that doing so is consistent with ethical obligations and feasible at reasonable cost. The text-based sources group further argues that, in most cases, it should not be too onerous for scholars to provide extended source excerpts to back up key claims, especially if scholars plan to do so from the outset of a research project. By contrast, groups focused on human-subjects research and higher-risk contexts (I.2, II.2, IV.2, and IV.3), while recognizing intellectual value in sharing some evidentiary materials, argue against any default practice of data sharing and in favor of great caution in considering the implications of this strategy for human participants and their communities. Their reports also point readers to selective forms of access, such as the reproduction of extended excerpts, as a more practicable form of ethical data sharing than the posting of full (anonymized) transcripts or field notes. Finally, both the ethnography group (for epistemological and ethical reasons) and the authoritarian-contexts group (for ethical reasons) take strong positions against sharing original or “raw” data of any kind. It is thus difficult to identify a single data-sharing principle or presumptive expectation that would be understood as workable for all evidence-based qualitative research.

Beyond their discussion of the merits and risks of data sharing, a key contribution of the QTD reports is to draw attention to and propose a number of other concrete strategies through which researchers and scholarly communities can advance the credibility and evaluability of empirical qualitative claims without abrogating ethical obligations. Among these are:

  • when quoting from a response to an interview question, sharing the complete response in order to provide wider context, while minimizing the deidentification challenges involved in sharing an entire transcript;Footnote 91

  • where transcripts cannot be shared, reporting the number of interviews consistent and inconsistent with a proposed hypothesis;Footnote 92 and

  • the use of Annotation for Transparent Inquiry, a technology developed by the Qualitative Data Repository at Syracuse University, which allows researchers to layer a citation, analytical note, source excerpt, and possible link to a source over the relevant passage in the article text.Footnote 93

The text-based sources group argues, further, that scholars should routinely provide sufficient information about the location of publicly available sources to ensure that others can find them, and should specify the particular parts of any source that are being drawn upon (e.g., by including page numbers).Footnote 94 These are practices likely to be valued widely by political science researchers, regardless of methodological or epistemological orientation.

We also see broad, explicit agreement among qualitative research communities about the importance of other general forms of openness. We note again, in this context, that some qualitative research communities grounded in interpretivist or non-positivist epistemologies reject the concepts of research transparency, openness, and explicitness from first principles.Footnote 95 The discussion here thus focuses on those groups whose deliberations did not center on a fundamental critique of transparency as a frame for thinking about research communication:

Transparency about generating evidence: Across a wide range of qualitative research traditions, there is a clear consensus on the vital importance of providing readers with detailed accounts of how the evidence used in a study was generated. QTD groups specify in their reports what this involves for their type of research, provide numerous examples, and point to a large number of published works that pursue this form of explicitness effectively.Footnote 96 Working groups also identify a substantial number of specific practices, both commonplace and innovative, in this domain and make a number of novel proposals for how this form of transparency might be advanced. The reports suggest, for instance, that transparency about the generation of evidence might imply:

  • providing information not just about the production of evidence that researchers themselves generated but also about the origins of sources that pre-date the study, such as textual materials, thereby making explicit the scholar’s critical use of those sources;Footnote 97

  • sharing the questionnaires used or the questions asked when using interview or survey responses as evidence;Footnote 98

  • providing an interview table containing key metadata for all interviews conducted;Footnote 99

  • reporting divergences between planned and actual data-collection processes;Footnote 100

  • specifying, for small-n analysis, what was known about the cases at the time of their selection or identifying those cases that were almost chosen for analysis but ultimately not included;Footnote 101

  • recording and posting deliberations about coding choices for content-analytic work;Footnote 102 and

  • providing the foregoing kinds of information in a dedicated appendix if space constraints or readability considerations do not allow for inclusion in the main text.Footnote 103

At the same time, the report on research ethicsFootnote 104 makes clear that transparency about evidence-generation is not without potential complications: as noted earlier, in some contexts, a detailed account of fieldwork sites might be sufficient for well-informed actors to identify participants or their communities.

Transparency about analytic process: We observe similarly broad agreement among qualitative scholars on the importance of explicitness about analytic processes.Footnote 105 Again, the particular form that this type of transparency may take varies across research approaches. Among the specific practices discussed in the reports are:

  • explaining how particularities of case context or background knowledge shape the interpretation of evidence;Footnote 106

  • noting the steps taken to challenge one’s own premises or early hunches in the course of a project;Footnote 107

  • reporting when initial hypotheses were dropped or modified—or when new hypotheses were developed—in light of the evidence;Footnote 108

  • explicitness about whether an analysis aims for generalization beyond the cases being examined;Footnote 109 and

  • for some methodologies, formally modeling or explicitly mapping the links between evidence and inference.Footnote 110

Transparency about risks to human participants/communities: Across the groups focused on research involving human participants, there was broad agreement and emphasis on the value of researchers conveying to their audiences what risks their interlocutors faced as a result of participation in the research, what information was or was not shared with participants, and what steps the researcher took to protect them and their communities.Footnote 111

Transparency toward human participants: Most groups focused on human-subject-oriented research likewise identify transparency toward human participants as foundational to ethical scholarly practice.Footnote 112

Openness about researcher positionality and researcher subjectivity: While the concepts have not featured prominently in discussions of research transparency in political science, all QTD reports focused on research with human participants highlight the value of explicit discussions of how scholars’ positionality and subjectivity might have shaped their interactions in the field or their interpretations of the evidence.Footnote 113 And while such reflexivity is often associated with interpretive research, positivist scholarship would similarly benefit from such discussion insofar as researcher positionality might bias survey or interview responses.Footnote 114

Alongside these areas of broad, explicit agreement, we also note that there was no disagreement about—though also less discussion of—the value of two other forms of transparency: transparency about research goals and transparency about conflicts of interest. The ethnography group elaborates a strong case for explicit discussion of what a piece of research aims to explain, explore, or uncover.Footnote 115 The importance of transparency about potential conflicts of interest features prominently in the report on research in violent settings,Footnote 116 and in cross-group conversations there appeared to be a wide consensus on the desirability of such explicitness. Further, the interpretive methods working group proposed turning the demand for transparency on the profession itself, by interrogating scholars’ often-unstated ideological presumptions, such as a belief in science as a method for uncovering objective truths and a commitment to preserving liberalism.Footnote 117

Implications for Editors, Reviewers, Funders, and Professional Bodies

For the most part, the QTD and this overview essay have focused on questions confronting researchers. Yet journal editors and publishers, reviewers, funding agencies, and professional associations also need to grapple with the rationale for, costs and limits of, and practicalities of making scholarship transparent. What do the outcomes of the QTD process mean for their policies and practices?

The QTD was not intended to—and did not—culminate in the elaboration of a set of qualitative transparency rules that journals, presses, or other professional bodies might adopt. The challenge of identifying common expectations or criteria for qualitative scholarship is vastly more complex than for quantitative work, given the tremendously variegated nature of evidentiary forms and logics of inquiry involved. This is true even within most research traditions. As the reports make clear, there are too many ways of understanding and doing ethnography or process tracing, for instance, to itemize a comprehensive set of conditional openness procedures that researchers ought to undertake. QTD participants also drew attention to the time-bound nature of scholarly expectations: research methodologies are a focus of ongoing innovation, and practices considered normative today may come to be seen as inadequate or problematic tomorrow.

At the same time, as the discussion in the foregoing section makes clear, the deliberations do suggest several general types of information that are reasonable for editors, reviewers, and funders to look for in most qualitative empirical research outputs. In particular, it appears broadly agreed across most qualitative communities that it is fair to expect authors to provide considerable information about (1) how the evidence was generated, (2) how the analysis was conducted, and (3) how risks to human participants were managed.

When it comes to making “raw” evidence available, the paramount message from the deliberations is that data sharing calls for differentiated judgment, rather than a general obligation to share the maximum amount of materials. Editors, reviewers, and funders should consider what precisely would be gained by asking an author to share their source materials; how much needs to be shared in order to reap these gains; what risks such sharing might pose to those whom the researcher may have an ethical obligation to protect; and how time-consuming and costly it would be for the author to make the source materials meaningfully accessible to others.Footnote 118 To a great degree, the gains to data sharing will depend on the methodology underpinning a given study. Providing readers with access to at least parts of the underlying evidentiary record is considered beneficial to understanding and assessment for a number of qualitative approaches, including QCA, content analysis, and process tracing. On the other hand, the idea of sharing one’s “data” is not an intellectually coherent notion for ethnographers or practitioners of other interpretive methods.

Moreover, as the working group on research ethics points out, editors must, in making data-sharing requests of authors, also take into account the steep informational and ethical asymmetry between editor and author. It will generally be the author who has the firmest grasp of the potential harms that might arise from the disclosure of information, given the particularities of the research context; and it is, ultimately, the author who has incurred the moral obligation to protect participants. While authors should be required to justify their choices about whether, and what, to share, their reasoned arguments on this matter should receive strong deference. In addition, editors, funding agencies, and reviewers ought to avoid even the appearance that compromising on those ethical obligations is expected or might improve publication or funding prospects, lest researchers feel pressured either to cut ethical corners or to avoid studying sensitive topics altogether.Footnote 119 Likewise, the ethics group calls for revising the APSA Ethics GuideFootnote 120 to vest the individual researcher with primary responsibility for managing the ethical dilemmas confronted by her scholarship.

Further, for those concerned about evaluability in the absence of data sharing, the QTD reports suggest a number of creative alternatives through which authors might reasonably be asked to shore up the credibility of their claims, from providing more extended excerpts or furnishing metadata and interview protocols to constructing summaries of the balance of evidence or adducing corroborating clues in publicly available sources.

Finally, the QTD reports—many of which focus on distinct research methods and settings—can serve as a resource for editors or funders seeking to further develop evaluative criteria that are appropriate to the form of evidence, logic of inquiry, and contextual circumstances with which a study engages. The reports represent articulations of the considered understandings of research openness held by a wide range of qualitative research communities. They thus can help ensure that assessments of qualitative research make sense within the intellectual traditions in which authors are operating.

The summaries that follow this essay, and the fourteen full reports to which they are linked, are a rich source of information about the key considerations that ought to factor into transparency decisions in particular research situations—and an excellent guide to the kinds of questions that editors and reviewers ought to be asking.

Summaries of the Final Reports of the QTD Working Groups

Epistemological and Ontological Priors: Varieties of Explicitness and Research Integrity (Working Group I.1, Subgroup A)

—Marcus Kreuzer and Craig Parsons

The original DA‑RT transparency agenda aspired to be “epistemically neutral,” hoping that open and explicit practices would promote “cross-border understanding” among research traditions, as Lupia and Elman wrote in their 2014 symposium in PS. This claim to neutrality provoked challenges during the Qualitative Transparency Deliberations. Some objected to DA‑RT’s notion of transparency as narrow and called for broader conceptions of openness or explicitness. Others rejected the very goal of DA‑RT, questioning the value of seeking newly explicit standards. This dividing line ran through our Working Group on Ontological and Epistemological Priors, to the point that we opted to write two reports. Our contribution, which is more optimistic about an inclusive version of DA‑RT, should therefore be read alongside the one by Timothy W. Luke, Antonio Y. Vázquez-Arroyo, and Mary Hawkesworth.

Our full report inventories the views of transparency or explicitness highlighted in the QTD discussions. It notes that scholars’ views of transparency reflect different understandings of ethics, the history of science, the sociology of knowledge, and cognitive psychology that inform the overall integrity of the research process. This brief summary sketches five epistemological views of transparency in research:

  • A frequentist/experimentalist epistemology undergirds orthodox social science. Important differences exist between the frequentist logic of large-N observational science and the manipulation-of-controls logic of experiments, but they share key epistemological views. Linked to ontologies that posit underlying generalities in politics, they trace knowledge production to the systematic, typically replicable use of controls (either manipulated or observed across cases). To the extent that these methods are logical and transparent, scholars in this tradition suggest, we can see through complexities and ambiguities to reveal useful generalizations. They therefore generally treat transparency in the final testing stage as sufficient for confidence in valid results. They do not see such test results as simple Truth, and they acknowledge that the validity of those results is conditional on the work done during earlier stages of knowledge production; most accept Lakatosian-style caveats about the conventionalist foundations of research and the tentative nature of conclusions. Still, they emphasize testing that is evaluated against specific transparency criteria as the key to check for subjectivity and errors.

  • The Bayesian/process-tracing tradition views knowledge production as more conditional and contextual, advocating broader transparency norms. Its core principle is that the confidence we derive from any evidence reflects what we know from prior research. This prioritizes the scholarly and empirical context for any claim. Interpretation of evidence depends on careful analysis of preceding knowledge claims and consideration of alternative explanations. It also requires careful analysis of the concrete context of evidence. Transparency concerns therefore extend into pre-testing stages of research. If we are transparent about preceding knowledge and how we factor new evidence into probabilistic confidence in new claims, this tradition suggests that we will see the world reasonably clearly.

  • The historical tradition overlaps with Bayesianism but privileges context more deeply. It, too, sees knowledge as conditionally evolving, but emphasizes a level of causal complexity that frustrates any strict logic of Bayesian updating. Given complex temporal sequencing or other asymmetrical or contingent interactions, probabilistic estimates of confidence in any claim may be impossible. It thus favors the more mechanistic and deterministic logic of detectives solving a crime, in which the researcher seeks a whole chain of evidence that must be observable to support a hypothesis (or to convict a murderer). Scholars in this tradition tend to endorse transparency—the jury must see the detective’s evidence—but worry about its practical limits. In principle, their complex stepwise logic extends transparency concerns far back into research design. In practice, publicizing all the detective’s steps may be cumbersome and contribute little to compelling results.

  • The modern constructivist tradition overlaps with historical thinking but adds an ontological emphasis on social construction. In positing that human-made ideational filters may significantly shape action, these scholars characterize knowledge production as still more conditional and contextual. They suggest that investigating people is not like investigating a house fire or a mechanical failure. Materials burn similarly under similar conditions, but different people may act very differently. Modern constructivists therefore prioritize especially thick evidence of action and rhetoric and analytic attention to the meaning of evidence in its human context. Like the preceding traditions, they aspire to tentative truth claims built on logic and evidence—their position, roughly, is that politics is demonstrably socially constructed—and formulate their research processes explicitly in open debate with orthodox traditions. In a socially constructed world, however, transparency concerns may be better translated as “openness.” At best, scholars can explicate how they look through a glass darkly.

  • The interpretivist tradition, discussed in greater detail in our working group’s other subgroup report and the separate report by the working group on interpretive methods, shares modern constructivism’s ontological emphasis on social construction. It suggests, however, that social construction further alters how research works. Since scholars (like actors) only access the world through social constructs, we cannot directly debate how truth claims correspond to the world. Instead, interpretivists pursue empirical research to construct distinctly coherent narratives and engage in a mind-opening confrontation of perspectives. This rejection of a correspondence theory of truth makes the language of transparency seem inappropriate, as research does not help scholars see the “real” world. Our working group disagreed about the further implications of this position for broader notions of research explicitness. Our subgroup argued that even—or especially—in a world that cannot reveal itself to us, scholarly contributions depend on communicating explicitly how we arrive at a distinct narrative. Interpretivists construct their distinct narratives with logic and empirics and employ well-elaborated methods like ethnography or genealogy. Thus interpretivist research, too, may gain from greater explicitness, though those gains certainly have practical limits, as they do in other research traditions.

We hope that the QTD will encourage all scholars to probe those limits and potential gains.

Epistemological and Ontological Priors: Explicating the Perils of Transparency (Working Group I.1, Subgroup B)

—Timothy W. Luke, Antonio Y. Vázquez-Arroyo, and Mary Hawkesworth

The discipline of political science encompasses multiple research communities, which have grown out of and rely upon different epistemological and ontological presuppositions. Recent debates about transparency raise important questions about which of these research communities will be accredited within the discipline, whose values, norms, and methods of knowledge production will gain ascendancy, and whose will be marginalized. Although the language of “transparency” makes it appear that these debates are apolitical, simply elaborating standards that all political scientists share, the intensity and content of recent contestations about DA-RT, JETS, and QTD attest to the profoundly political nature of these methodological discussions.

This report traces the epistemological and ontological assumptions that have shaped diverse research communities within the discipline, situating “transparency” in relation to classical (Aristotelian), modern (Baconian), and twentieth-century (positivist, critical rationalist, and postpositivist) versions of empiricism. It shows how recent discussions of transparency accredit certain empirical approaches by collapsing the scope of empirical investigation and the parameters of the knowable. And it argues that “transparency” is inappropriate as a regulative ideal for political science because it misconstrues the roles of theory, social values, and critique in scholarly investigation.

As a form of human knowledge, science is dependent upon theory in multiple and complex ways. Theoretical presuppositions shape perception and determine what will be taken as a “fact”; they confer meaning on experience and control the demarcation of significant from trivial events; they afford criteria of relevance according to which facts can be organized, tests envisioned, and the acceptability or unacceptability of scientific conclusions assessed; they accredit particular models of explanation and strategies of understanding; and they sustain specific methodological techniques for gathering, classifying, and analyzing evidence. Theoretical presuppositions set the terms of scholarly debate and organize the elements of “scientific” activity. Moreover, they typically do so at a tacit or preconscious level, and it is for this reason that they appear to hold such unquestionable authority.

Recognition that “facts” are theoretically constituted calls into question basic assumptions about empirical “reality” and the “autonomy” of facts, challenging the “givenness” of data and the idea that reality is ontologically distinct from the theoretical constructs advanced to explain it. Recognition that “facts” can be so designated only in terms of prior theoretical presuppositions implies that any quest for an unmediated reality is necessarily futile. Theoretical presuppositions organize and structure research by determining the meanings of observed events, identifying significant problems for investigation and indicating both strategies for solving problems and methods by which to test the validity of proposed solutions. The theoretical constitution of facts challenges the correspondence theory of truth. There are no “autonomous facts” that can serve as the ultimate arbiter of scientific theories. Science is a human convention rooted in the practical judgments of a community of fallible scientists struggling to resolve theory-generated problems under specific historical conditions.

That there can be no appeal to neutral, theory-independent facts to adjudicate between competing theoretical interpretations does not mean that there is no rational way of making and warranting critical evaluative judgments concerning alternative views. Indeed, the belief that the absence of independent evidence necessarily entails relativism is itself dependent upon a positivist commitment to the verification criterion of meaning. Only if one starts from the assumption that the sole test for the validity of a proposition lies in its measurement against the empirically “given” does it follow that, in the absence of the “given,” no rational judgments can be made concerning the validity of particular claims. Once the “myth of the given” has been abandoned and once the belief that the absence of one invariant empirical test for the truth of a theory implies the absence of all criteria for evaluative judgment has been repudiated, then it is possible to recognize that there are rich rational grounds for assessing the merits of alternative theoretical interpretations.

Confronted with a world richer than any partial perception of it, scientists draw upon the resources of tradition and imagination in an effort to comprehend the world before them. Operating within limits set by fallibility and contingency, scientists employ creative insights, practical reason, formal logic, and an arsenal of conventional techniques and methods in their effort to approximate the truth about the world. But their approximations always operate within the parameters set by theoretical presuppositions; their approximations always address an empirical realm that is itself theoretically constituted. The underdetermination of theory by evidence ensures that multiple interpretations of the same phenomena are possible.

For this reason, the politics of knowledge is a legitimate focus of analysis: the analytic techniques developed in particular cognitive traditions have political consequences that notions of transparency render invisible. In circumscribing the subject matter appropriate to “science,” narrowing the range of analytic practices accredited as empirical inquiry, establishing problematic norms for assessing political inquiry, identifying basic principles of practice for political scientists, and validating one ethos for all scholars, methodological strictures of DA-RT and JETS sustain particular modes of intellectual life and marginalize others. These concerns lie at the core of objections to transparency as a regulative ideal for all political science research.

As a scholarly discipline, political science encompasses multiple analytic approaches that are theory-laden and methodologically driven. But no categories or concepts ever fully capture or exhaust the precategorical. Narrow methodological prescriptions associated with transparency miss this critical point. By universalizing notions that knowledge is “discovered” and truth “revealed” through systematic observation and testing and the replication of findings, transparency norms mask diverse theoretical presuppositions and particular institutional ideologies operating within political science itself. Purportedly universal notions of transparency ignore the sociality of perception, the theoretical constitution of facts, and the politics of representation. Ironically, then, in its quest for truth, current transparency initiatives threaten to invest practices appropriate to positivist research traditions, which have been problematized and criticized by other research traditions, with a kind of ideological power to define “Political Science” in a narrow fashion that delegitimizes other modes of inquiry that do not share its epistemological assumptions.

Research Ethics and Human Subjects: A Reflexive Openness Approach (Working Group I.2)

—Lauren M. MacLean, Elliot Posner, Susan Thomson, and Elisabeth Jean Wood

The foremost ethical obligation and therefore the first duty of scholars is the ethical treatment of people affected by our research, particularly its human subjects.

Our working group’s report discusses the implications of the primacy of the ethical treatment of human participants—our term for “human subjects”—for empirical research in political science. Although research ethics encompasses a broader range of issues (including honesty, integrity, competence, and the respectful treatment of students and colleagues, among others), we focus on the primacy of human participants both because the human costs of violating this obligation are likely much higher than, for example, plagiarism, and because this principle may conflict with evolving norms of transparency in the social sciences. We acknowledge that “transparency” frequently has benefits, but nonetheless focus on the tensions between it and the primary obligation to human subjects and other ethical obligations in a wide range of research contexts, including settings of violence and repression.

To support our ethical positions, we advance a broad and distinct approach of “reflexive openness” that incorporates sustained reflection on the ethics of research practices, what ethnographers term “reflexivity.” This approach has three important elements. First, it promotes ongoing reflexivity by the author vis-à-vis her research participants. Second, it encourages all scholars to provide a reasoned ethical justification of their research practices, especially when seeking to publish their analysis. Finally, the ethical expectations guiding reflexive openness are universal, and thus the approach is inclusive of researchers regardless of subfield, methodology, topic, or empirical context.

In our report, we first review the history of prioritizing the ethical treatment of human participants. In the report’s second section, we highlight challenges and tensions in conducting ethical research. In the third and fourth sections, we uncover potential ethical risks of adopting narrow notions of transparency that do not adequately protect human subjects and other research participants and discuss likely inadvertent, yet disturbing, long-term consequences for the production of knowledge. In the fifth, we suggest several benefits of adopting the alternative reflexive openness approach to transparency, a broader and more appropriate one that places ethical practices at the core of the research endeavor. A principal benefit is that the approach asks all scholars conducting research involving human participants to give reasons for the extent to which they can describe their processes of generating evidence (omitting place names, for example) and of analyzing their data, and for the extent to which, and how, they can ethically share their data. In the sixth section, we provide principles and strategies for researchers to manage ethical dilemmas that arise within diverse settings. In the final section before the conclusion, we propose policy reforms and institutional changes to support ethical research in the discipline. This includes consideration of what happens when the researcher and her editor or reviewers disagree on what can ethically be shared. In particular, we assess why researcher judgment in protecting human participants should trump editorial exemptions.

As political scientists who have conducted research with human participants in both democratic and authoritarian systems, as well as in conflict and post-conflict settings, we have faced a wide range of ethical dilemmas. Our full report draws on our many decades of research with human participants in contexts that vary in terms of geography, level of economic development, political stability, and regime type. In our respective projects, we have employed a diversity of research methods, including archival research, interviews, ethnographic observation, oral histories, focus groups, surveys, and formal modeling. Our report also draws on posts by scholars to the online Qualitative Transparency Deliberations and informal feedback received offline, especially from junior colleagues and graduate students who preferred to express their thoughts privately.

We start with the assumption that ensuring the ethical treatment of human research participants is the primary duty of every scholar—an inviolable obligation that supersedes all others except in the most extraordinary of circumstances. We then systematically evaluate the risks, dilemmas, and consequences of a narrower notion of transparency as compared to the reflexive openness approach. We argue that a shift to the latter approach yields great potential benefits and should be the foundation of research methods and practices. We urge political scientists to strengthen such ethical commitments across disciplinary institutions—from the revision of APSA guidelines to the submission and review policies of journals and the expectations of our academic departments. Of particular urgency are the incorporation of ethical training in doctoral programs and the endorsement by APSA leadership of more high-profile activities at association events. The lives and livelihoods of our participants are at stake, as are methodological pluralism and access to top journals for colleagues at less-resourced institutions.

Power and Institutionalization (Working Group I.3)

—Rachel Beatty Riedl, Ekrem Karakoç, and Tim Büthe

Working group I.3 was established to consider the advantages and disadvantages of different ways of fostering research explicitness.Footnote 121 We sought to foster discussion of questions such as: What are the key differences between distinct ways of institutionalizing research explicitness? For what kinds of challenges are different models of institutionalization (including social norms, explicit standards, and mandatory rules) best suited? And, in particular, how do different institutional modes for advancing research explicitness interact with power and resource differentials between scholars at different career stages, undertaking different kinds of work, or located at different kinds of educational institutions? Working group I.3 was also called upon to consider the appropriate role of particular institutional actors in promoting (or possibly “enforcing”) scholarly norms of research explicitness—in particular editors and reviewers, IRBs, and funding agencies.

Our report addresses these questions by setting out four ideal-typical mode(l)s of institutionalizing research explicitness, which differ regarding key elements of institutionalization:

  • strictly voluntary individual practices without institutionalization

  • social norms with individual responsibility for implementation

  • standards with variable incentives for adoption and (possibly) decentralized enforcement

  • rules: obligatory prescriptions with centralized enforcement

Based on the deliberations that took place on the various threads of our working group’s online forum and elsewhere on the QTD website, as well as numerous bilateral and group discussions that we have undertaken via e-mail and in person with a highly diverse set of colleagues, we see considerable support for (something like) each of these models, but no one model has overwhelming support among scholars of politics.

Our report therefore focuses on two tasks: (1) We clarify the dimensions on which these alternative approaches to the institutionalization of norms for the explication of one’s research methods differ. Here, we focus on codification, responsibility for implementation, and incentivization/enforcement. (2) We spell out key implications (pros and cons) of the alternative approaches, to allow for more informed debate and decision-making by particular research communities, individual scholars as authors and reviewers, editors, and funding agency officers.

To consider how different ways of institutionalizing explicitness interact with power structures as well as other forms of inequality among scholars, we begin by identifying the differences that may result in inequities, about which scholars who participated in the QTD have expressed concerns. They include:

  • seniority and rank (from graduate student to adjunct to tenure-track/tenured)

  • type of institution (from community colleges to major research universities)

  • epistemological tradition

  • methodological approach

  • gender

  • under-represented minority status (resulting in barriers to networks, resources, and expression of social norms that may vary among different communities)

  • geographic locations (including “domestic” scholars, who are socialized to meet the cultural expectations of the dominant—mostly U.S.—communities of reviewers for the leading journals, versus international/foreign scholars, especially from developing countries)

  • availability of funding for producing and disseminating data (including conditional versus contractually guaranteed unconstrained funding; short-term/uncertain versus long-term/sustained)

  • research environment (including security concerns for researcher or research subjects).

We then examine how the different characteristics of the four ideal-typical mode(l)s of institutionalization interact with resource inequalities and power hierarchies to create highly unequal incentives and constraints. For instance, the burden of full research explicitness can be prohibitively high for less well-resourced scholars, exacerbating their disadvantage in scholarly work. Such implications, moreover, might be more serious at some stages of the research process than at others—given that different stages of the project life cycle are associated with different forms of explicitness (production transparency, analytical transparency, post-analysis data sharing).

At the same time, the relationship between any particular kind of inequality and the institutionalization of research explicitness is often complex. Many colleagues are concerned, for instance, that more demanding requirements by the leading journals and funding agencies act as barriers to entry, especially for younger and less well-resourced scholars. Such concerns should be taken very seriously, but it should also be noted that explicit standards make it easier for newcomers to join a given research community. Put another way, highly implicit social norms can also be very exclusionary—all the more so when those norms are highly effective in shaping insiders’ expectations (norms can be powerful, even if such power is not exercised by any particular person).

Throughout, our report also considers the special role(s) of journal editors and reviewers, funding agency program officers, and institutional review boards (IRBs), as distinct institutional nodes of power that shape the larger context for research explicitness. We recognize—as many colleagues do—that these actors have rights and indeed obligations to uphold high standards of research integrity (or more narrowly, research ethics in the case of IRBs), which might warrant articulating standards (and maybe even setting and enforcing rules) for research explicitness that go beyond the social norms that are widely agreed upon across a broad range of political science research communities. At the same time, with such power to set the rules comes the responsibility to be attentive to the potential for certain rules to exacerbate social, political, and financial inequality, marginalization, and exclusion (and to minimize such adverse side-effects). Generalist association journals in particular ought to avoid requirements that in effect marginalize or exclude certain research communities of the association. We lay out the trade-offs inherent in the four ideal-typical models of research explicitness in our working group’s report in an endeavor to support inclusion and research diversity.

Text-Based Sources (Working Group II.1)

—Nikhar Gaikwad, Veronica Herrera, and Robert Mickey

Scope of the Report

Recent discussions about transparency in political science have become fraught with concerns over replicability or even scholarly misconduct. The report of the QTD Working Group on Text-Based Sources emphasizes instead that the ultimate goal of augmenting transparency is to increase our ability to evaluate evidentiary claims, build on prior research, and produce better knowledge. Accordingly, this report reviews the complex issues raised by pursuing these goals through rendering more transparent qualitative research that employs text-based sources. We interpret “text-based sources” broadly to include a range of documents, from those found in state archives or the collections of parties and social movements to diaries, news media, and secondary sources. Text-based sources may also take the form of other media, such as photographs, transcriptions of radio or television broadcasts, videos, or websites. Moreover, our report applies to the text-based portions of all social-scientific research, no matter what (combination of) methods a study might employ in its empirical work. Drawing on QTD deliberations, existing scholarly work, and our own reflections, we discuss a range of transparency-enhancing practices and technologies, the costs and risks attendant on each, and their potential benefits. We close with a set of recommendations, ranked from least to most controversial.

As a preliminary step, we note that many discussions of transparency-enhancing initiatives focus on the costs of change without reflecting enough on the status quo. As a corrective, we use one of our own works to illustrate that current practices among qualitative researchers are often deficient. Our analysis emphasizes that a better appreciation of where things stand now is vital in thinking through the benefits and costs of enhanced transparency.

Forms and Benefits of Transparency

The report describes several types of transparency-enhancing practices relevant to text-based sources. Some of these practices improve transparency regarding the process of generating evidence. Clearly identifying a source’s location helps other researchers locate and evaluate evidence, expanding the scope and reach of one’s research. Additionally, it is important to convey clearly how text-based sources are produced—by which actors, and for what purposes? Understanding the process by which, say, an archive came to house certain documents, but not others, is especially valuable. We also discuss the benefits of greater clarity concerning how researchers select sources for analysis. Perhaps most importantly, readers would benefit from greater transparency about the analytic process: How do authors think the cited sources help sustain analytical, descriptive, or causal claims? Finally, data sharing—in the form of substantial quotation or reproduction—can help readers better evaluate authors’ claims by allowing them to encounter the language appearing in these texts.

We next consider technologies that can enhance transparency. These range from the “meaty footnote” (often appearing in a “methodological appendix” or “narrative”), to the active citations approach, to the “Transparency Appendix” (or TRAX), and to the “Annotation for Transparent Inquiry” (ATI) method. Because the objective of transparency is for others to be able to critically evaluate key claims, we encourage researchers to consider applying these standards, where appropriate and feasible, not for every single sentence in a scholarly work but rather for those that undergird analytical, descriptive, or causal claims central to a scholar’s main argument.

Costs, Risks, and Limitations

The report relies heavily on the QTD deliberations in discussing the costs, risks, and limitations that researchers may face in enhancing research transparency. These problems include financial and temporal costs and constraints imposed by copyright law, among others. In particular, the report discusses how some costs are more problematic for junior scholars and those with less research support, and for scholars working with foreign sources who might be expected to provide translations in order to publish their findings in English-language journals. In addition, we point out ethical concerns that emerge with some modes of transparency, such as the sharing of sensitive archival materials. While acknowledging that these costs and risks are real, we consider ways in which they can be ameliorated.

Recommendations

The report closes with several recommendations for enhancing transparency. These are ordered from least costly, minimally disruptive, and uncontroversial, to practices likely to be very costly and controversial. Practices with a great deal of support in the QTD deliberations include (1) greater explicitness about a source’s location, for example, requiring that footnotes or endnotes include page numbers unless the writer actually refers to the cited work’s entire argument, as well as (2) higher word-count limits for journal articles, especially for footnotes, endnotes, and citations. More demanding, costly, and therefore controversial are recommendations that (3) researchers provide analytic notes that explain how sources back up key claims. We note existing and emerging technologies that facilitate these practices. Lastly, we recommend (4) more demanding practices that promote production transparency and, where appropriate and feasible, data access, and discuss examples of ways in which this might be achieved. These include TRAXs and ATIs, with or without sharing of text-based evidence.

There are tradeoffs involved in increasing transparency, and we emphasize that these tradeoffs fall more heavily on some researchers than on others. But we also think the goal of enhanced transparency is one the discipline should pursue. That said, if the adoption of onerous requirements by publication outlets were to discourage qualitative research, our discipline would be much worse off. We thus deemphasize perhaps the most controversial component of transparency, the mandatory sharing of sources. In our view, scholars should be able to choose whether (and which) sources to share, and in what format.

In sum, we think that many—maybe even most—qualitative researchers would agree that, if the costs are not too onerous, some of the broad components of transparency described in this report are beneficial and implementable. We think it is productive for scholars—as researchers, reviewers, and publication gatekeepers—to think through the goals of research transparency, the status quo in transparency practices in their knowledge communities, and the benefits they can gain by pursuing this goal.

Evidence from Researcher Interactions with Human Participants (Working Group II.2)

—Anastasia Shesterinina, Mark A. Pollack, and Leonardo R. Arriola

Scope of the Report

This document summarizes the Community Transparency Statement of Working Group II.2—Evidence from Researcher Interactions with Human Participants. We examine how transparency is understood by scholars who regularly engage with human subjects, assess the benefits and costs of transparency practices, and present practical recommendations for researchers, editors, reviewers, and funders. Our findings draw on contributions posted to the Qualitative Transparency Deliberations (QTD) online forum, offline consultations with scholars from across the discipline, and related published materials.

We find broad support for the principle of transparency among scholars working with human research participants, but our consultations also make clear that the meaning of transparency should be understood as part of research integrity writ large. The scholars we consulted were nearly unanimous in emphasizing the importance of openness and explicitness—e.g., by specifying how information from human subjects research is collected and analyzed or interpreted—for the integrity of the research enterprise. Transparency requirements must be weighed against the ethical obligation to protect human subjects, the epistemological diversity within the discipline, and the workload imposed on researchers using qualitative data.

Forms and Benefits of Research Transparency

Working Group II.2 examined how transparency is understood by scholars in terms of data access, production transparency, and analytic transparency.

Production transparency: Accurately reporting the process by which evidentiary material is generated remains a core aspect of transparency across research traditions. Many of the scholars consulted expressed support for this aspect of transparency, provided that it is interpreted broadly to mean that scholars report on research processes (e.g., identification and recruitment of participants, where this information does not compromise the security of their subjects) as well as on scholar reflexivity and ethical dilemmas.

Analytic transparency: Political scientists generally support analytic transparency, namely, providing a clear account of how conclusions are drawn from data. Scholars see such analytic transparency as enabling better assessments of evidence from different research traditions and as guarding against bias. The benefits of facilitating replication and discouraging dishonesty are also acknowledged, but they are considered of secondary importance and applicable to some research traditions but not others. Accordingly, many argue for recognizing that the most appropriate way to document analytic processes is often specific to particular epistemic communities.

Data access: Transparency discussions largely focus on making data available for evaluation or replication. Scholars working with human subjects, however, need a more flexible conceptualization that recognizes epistemological diversity and ethical imperatives. Rather than submitting interview transcripts or field notes, scholars could provide extended excerpts or detailed descriptions of the procedures used to collect and analyze data (see the production and analytic transparency discussions above), where ethically appropriate. The primary benefit of an expanded notion of data access would be to make findings from different approaches understandable to a broader research community. Some scholars also recognize benefits from facilitating replication and preventing dishonesty, but there is considerable disagreement regarding such outcomes.

Costs, Risks, and Limitations

Working Group II.2 deliberations identified five areas of concern: human subjects protection; access to human subjects; effort, time, and resources; power differentials; and epistemological diversity.

Human subjects protection: Researchers are primarily concerned by the potential dangers that transparency requirements can pose to human research participants. The sharing of anonymized or partially redacted interview transcripts or field notes could result in the unintentional violation of confidentiality. Descriptions of sampling techniques or characterizations of the pool of interviewees could also inadvertently reveal individual identities. Such concerns are most acute for researchers working among vulnerable populations, including ethnic minorities, sexual minorities, citizens of authoritarian regimes, and those living in conflict zones, especially victims of violence and repression.

Access to human subjects: Excessive transparency requirements might undermine the trust established with research participants, and endanger future access to populations of all types who might perceive a weakening of confidentiality. This concern was expressed with respect to requirements to grant access to interview transcripts or field notes as a condition of publication or funding support. Requiring the sharing of such documentation could also unintentionally introduce bias by driving away potential participants willing to express unpopular or unofficial positions.

Effort, time, and resources: Providing access to detailed accounts of how human subjects data are generated and analyzed may impose undue costs. Preparing and assembling qualitative appendices may produce a burdensome level of work not required of scholars whose methods do not involve human participants.

Exacerbating power differentials: Labor-intensive transparency requirements may fall most heavily on less established scholars or researchers at underfunded institutions, placing them at a disadvantage when publishing. A related concern involves the scholar’s intellectual property. The common requirement that underlying data be publicly shared within one year may be insufficient to allow scholars to make full use of data collected through time-intensive fieldwork.

Transparency standards and diversity: Editorial insistence on transparency may limit diversity in the discipline by holding qualitative researchers to a different standard and thus marginalizing researchers working in epistemological and ontological traditions incompatible with codified transparency standards.

Recommendations

Working Group II.2 identified a number of practices through which researchers can achieve meaningful transparency. We highlight here a few illustrative examples from the broader range of available tools. These tools, however, should only be used or requested when ethically, epistemologically, and practically appropriate, on a case-by-case basis.

In-article transparency discussion: The most obvious way to be transparent about research is to explain the process of gathering empirical information and the analytical process in detail.

Footnotes: Footnotes should be used to provide essential additional information on methodology, including data collection, and to support analytical claims.

Transparency appendices: Online appendices provide space to expand on methodology, fieldwork logistics, interview protocols or excerpts, and analysis procedures.

Discussion of reflexivity: Researchers can enhance transparency by explicitly discussing how their position vis-à-vis research participants affected the process of collecting and analyzing or interpreting data.

Active citation and innovations for data collection: Hyperlinked citations and innovations in data collection, such as video collections, provide new ways for sharing data if implemented with attention to human subjects and copyright concerns.

Comparative Methods and Process Tracing (Working Group III.1)

—Andrew Bennett, Tasha Fairfield, and Hillel David Soifer

Scope of the Report

Process tracing is a within-case method of drawing inferences from evidence in a case to theories about hypothesized causal mechanisms that might explain the outcome of that case. The comparative methods in this report include comparisons among small numbers of case studies that use process tracing. The report discusses four approaches to process tracing: traditional narrative-based analysis, Van Evera’s analytic tests, Bayesian process tracing, and process tracing that aspires to the relatively complete elucidation of causal mechanisms.

This report focuses on analytic transparency, leaving transparency in generating and sharing evidence to other QTD reports. As recent methodological advances in process tracing have been rapid and are the subject of ongoing debates, the report differentiates between core recommended practices and emerging practices that might be considered, depending on the costs entailed, by authors, reviewers, and readers. For each research practice, the full report provides exemplars.

Forms and Benefits of Research Transparency

Core Recommended Practices: The report identifies seven recommended practices, many of which are common to a variety of methodological approaches. These include:

  1. Clearly define concepts and describe how they have been operationalized and scored across cases, so that readers can understand justifications for case selection as well as the background conditions and the context in which the causal relationship is theorized to hold.

  2. Present the rationale for case selection and the logic of comparison so that readers can evaluate choices made by the researcher and assess how compelling any claims of generalizability might be.

  3. Clearly articulate the causal argument, including a discussion of both mechanisms and scope conditions.

  4. Identify and assess salient alternative explanations, so that readers can understand how the argument proposed builds on or challenges existing hypotheses.

  5. Explain how the empirical evidence leads to a given inference by identifying how the evidence was collected and interpreted, and why and to what extent the evidence supports the argument or undermines rival explanations. As part of this process, authors should address any consequential evidence that runs counter to their overall conclusions.

  6. Identify and discuss background knowledge that plays a central role in how the evidence is interpreted, which is critical for building consensus on inferences or helping scholars pinpoint sources of disagreement.

  7. Present key pieces of evidence in their original form where feasible, so that readers have the opportunity to assess whether the author’s interpretations and inferences are convincing.

As these practices are relatively uncontroversial, the report’s focus is on emerging practices that have engendered debate.

Emerging practices: The report focuses on two sets of emerging practices: transparency on case selection, and transparency on analytic processes involving background knowledge and the links between evidence and inferences in process tracing.

Transparency on Case Selection: In addition to the standard practice of identifying the rationale for case selection and the form of comparison used (most- and least-similar case comparisons, pathway cases, etc.), the report discusses two low-cost emerging practices: (1) elaborating on the information used in selecting cases versus information learned later, and (2) identifying the cases almost selected for study.

There are differing views on the value of specifying what information was known at the time of case selection. Some see this information as useful in assessing whether a researcher engaged in selection bias, and they argue that scholars should keep track of and report the temporal details of decision-making when case selection is an iterative process that evolves over the course of research. Some Bayesian researchers argue instead that the timing of what was learned when is irrelevant; what matters for making and evaluating inferences is simply the evidence uncovered from the cases and any relevant background knowledge.

There is more consensus that identifying the cases almost chosen for study and giving pragmatic or methodological reasons they were not chosen might be useful for pre-empting critiques from reviewers and alerting other researchers to cases they might study. While there are potential costs in terms of word limits, greater transparency on case selection can be kept brief or included in appendices.

Analytic transparency: Emerging practices include the application of Van Evera’s tests, Bayesian process tracing, and process tracing using particular philosophical understandings of causal mechanisms.

Van Evera’s test types are based on the researcher’s expectations about the degree to which a theory is unique and certain in anticipating evidence in a case. From four combinations of high versus low uniqueness and certitude, Van Evera outlines four tests: smoking gun, hoop, straw in the wind, and doubly decisive tests. The test type indicates whether finding or not finding a given piece of evidence strongly or weakly supports or undermines a hypothesis.
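To make the logic concrete, the mapping from expectations to test types can be sketched as a simple lookup. This encoding is ours, purely for illustration; it is not part of the report:

```python
# Illustrative encoding of Van Evera's four test types, keyed by
# whether the predicted evidence is unique to the hypothesis and
# whether the hypothesis certainly predicts it.
VAN_EVERA_TESTS = {
    (True,  True):  "doubly decisive",    # passing confirms, failing disconfirms
    (True,  False): "smoking gun",        # passing strongly confirms; failing says little
    (False, True):  "hoop",               # failing strongly disconfirms; passing says little
    (False, False): "straw in the wind",  # either result updates beliefs only weakly
}

def classify_test(unique: bool, certain: bool) -> str:
    """Return the test type implied by a prediction's uniqueness and certitude."""
    return VAN_EVERA_TESTS[(unique, certain)]

print(classify_test(unique=False, certain=True))  # -> "hoop"
```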

While Van Evera’s tests are in some senses intuitively or informally Bayesian, fully Bayesian approaches to process tracing entail using prior knowledge to assess how much confidence we initially hold in a given hypothesis relative to rivals and updating our views about which hypothesis provides the best explanation as we gather evidence. This latter step involves evaluating likelihood ratios, which reflect judgments about which hypothesis makes the evidence more plausible.
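In its standard odds form (notation ours), Bayesian updating across two rival hypotheses H1 and H2 given evidence E can be written as:

\[
\frac{P(H_1 \mid E)}{P(H_2 \mid E)} \;=\; \frac{P(H_1)}{P(H_2)} \times \frac{P(E \mid H_1)}{P(E \mid H_2)},
\]

where the left-hand side gives the posterior odds, the first right-hand term the prior odds, and the second the likelihood ratio. The prior odds summarize background knowledge before the evidence is considered; the likelihood ratio captures the judgment about which hypothesis makes the evidence more plausible. Disagreements between scholars can thus be traced to one term or the other.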

The most fully developed mechanism-focused approach, outlined by University of Virginia political scientist David Waldner in his work on the “completeness standard,” aims at a high level of explanatory completeness. Here, explanatory accounts may be viewed as more transparent to the extent that they outline a causal graph, provide an event-history map linking evidence from the case to each node in the causal graph, theorize about the causal mechanisms that link those nodes, and eliminate rival explanations by evidentiary tests or by showing that the rivals’ causal graphs, event-history maps, or theorizing are inadequate.
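As a rough illustration of these elements (the node labels and evidence entries below are hypothetical placeholders, not drawn from Waldner’s own examples), a causal graph and its accompanying event-history map can be thought of as linked structures:

```python
# Hypothetical sketch of the elements the completeness standard asks
# authors to make explicit. All labels are placeholders.
causal_graph = {
    # node: the nodes it is theorized to cause (directed edges)
    "X (hypothesized cause)": ["M1"],
    "M1 (intervening mechanism)": ["M2"],
    "M2 (intervening mechanism)": ["Y"],
    "Y (outcome)": [],
}

event_history_map = {
    # node: case evidence claimed to instantiate that node
    "X (hypothesized cause)": ["document A describing the initial condition"],
    "M1 (intervening mechanism)": ["interview B reporting the first-step response"],
    "M2 (intervening mechanism)": ["archival record C of the second-step response"],
    "Y (outcome)": ["observed outcome reported in source D"],
}

# Transparency check: every node in the graph should be linked to at
# least one piece of case evidence.
unsupported = [n for n in causal_graph if not event_history_map.get(n)]
print("Nodes lacking evidence:", unsupported or "none")
```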

Any of these emerging practices provides increased analytic transparency over traditional and less formal process tracing. This is particularly useful when authors and reviewers disagree on inferences; formal Bayesian analysis is particularly well-suited for identifying whether these disagreements are rooted in differences in priors and background knowledge, in likelihood ratios, or in the reading of the evidence itself. Bayesianism also provides a mathematical framework for assessing how sensitive conclusions are to judgments made at each step of the inferential process.

Costs, Risks, and Limitations

The report focuses in particular on the costs involved in formalizing process tracing, which increases the time, effort, and training required to conduct, read, and evaluate scholarship. For explicit Bayesian analysis, there are inherent limitations on unambiguously quantifying probabilities (e.g., for rare events) and practical difficulties when handling large amounts of complex evidence and multiple nuanced hypotheses. In mechanistic approaches, the more detailed the specification of the hypothesized mechanisms, the more voluminous the evidence needed to instantiate each step in the theory, and the greater the amount of evidence about which an author needs to be transparent. These requirements put pressure on word limits, although some of the analysis can be relegated to appendices.

Recommendations

We urge continued use of the core recommended practices discussed here. Additionally, we offer recommendations on the emerging practices discussed in the previous section.

  1. We encourage researchers to try out emerging practices in analytically transparent process tracing. Van Evera’s tests, formal Bayesianism, and mechanism-focused process tracing offer opportunities for considerable improvement in analytic transparency, albeit with substantial effort.

  2. We caution against making emerging practices in analytically transparent process tracing a norm for publication. Readable case narratives retain a vital role in making our research broadly comprehensible to a wide audience.

  3. Given that methodological literatures on more ambitious and formal approaches to process tracing are still in their infancy, making definitive best-practice recommendations, let alone imposing standards for how these approaches should be implemented in empirical work, is premature. Methodologists must continue experimenting with and working toward consensus on best practices, and opportunities should be provided for both authors and reviewers to receive training in these techniques.

Interpretive Methods (Working Group III.2)

—Lisa Björkman, Lisa Wedeen, Juliet Williams, and Mary Hawkesworth

The transparency initiative in political science arose in part as a result of scholars’ frustration in attempting to replicate findings published in leading journals. This concern with replicability, especially potent among positivist approaches to empirical inquiry, rests on important assumptions that are by no means shared by everyone in the discipline. It presupposes that evidence-based research necessarily involves what is often termed the “extraction” of objective data; that all data can be identically reproduced by other scholars; and that evidentiary material can be analyzed neutrally, such that the very same findings can be arrived at readily by any scholar who repeats the research. Some kinds of work (e.g., studies of roll call voting) may be amenable to this orientation toward knowledge production, but many forms of valuable research are not.

Knowledge production of all sorts involves deliberative processes that require individual and collaborative efforts to assess the merits of contending views. In acknowledging that nothing is manifest or self-evident, scholars obligate themselves to attend to the theoretical frameworks that variously construct and accredit evidence within particular research practices. Knowing how particular theories organize perception and construe relevant evidence is crucial to evaluating evidentiary claims. In contesting the parameters of debate within and across academic disciplines, scholars interrogate existing categories, question how boundaries have been drawn between one phenomenon and another, challenge the “operationalization” of terms, probe omissions and distortions, examine metaphors and analogies that structure understanding, develop new concepts, introduce novel modes of argument, and appeal to different registers of experience. This rich intellectual exchange, central to interpretive social science but certainly not exclusive to it, cannot be captured by the notion of transparency currently promoted by DA-RT advocates.

Moreover, an array of work in interpretive social science specifically has criticized fact/value distinctions, questioned the neutrality of perception, detailed the historically changing meanings of the very term objectivity, highlighted political science’s complicity with projects of empire, and illuminated how replication studies can cause racial, ethnic, and gender bias to become entrenched. The term replication likewise raises questions of general import—about what replicability tells us about the phenomenon being researched, how researchers’ priors inform their construction of evidence, and the added value that comes from considering alternative understandings of replication.

In this sense, interpretivists of various stripes are interested in replication in the broader sense of repetition. Interpretive social science often focuses on the ways in which social meanings are reiterated and power is reproduced, with some interpretivists emphasizing how social processes are variously placed at risk by iteration. Social conventions are by definition iterative—and interpretive social scientists can and do make generalizations as a result of observing and analyzing this repetition. Moreover, in being attuned to the politics of representation, interpretive scholars are able to analyze how concepts, definitions, measurements, and methods—the means of generating knowledge about the political world—are themselves data, which is to say, structured by power and laden with social value. In our view, there is no such thing as value-free social science.

Rather than engaging such substantive issues, the current transparency initiative ignores them and threatens, in the name of neutrality, to impose a biased hierarchy of worth on the rich and varied research agendas within political science.

While the transparency initiative rides roughshod over the diverse communities of argument that make up our discipline, it also impoverishes our thinking by insisting on transparency as an apolitical value. DA-RT’s purported commitments to openness and transparency are symptomatic of an ethical and political problem that comes from refusing to confront the inherently political nature of knowledge production. In a “post-truth” era when the cynical manipulation of facts has become the new normal (as evidenced by Oxford Dictionaries’ selection of “post-truth” as the 2016 word of the year), the insistence upon research transparency could be construed as an effort to manage epistemic insecurity. But an appeal to an overly simplistic conception of truth is inadequate to that task.

Despite claims to value neutrality, DA-RT actually reinstantiates a very particular notion of social science: as dealing fundamentally with falsifiable claims, as treating truth and consensus as synonymous, and as dismissing the ethical, political, and conceptual questions posed by interpretive scholars. DA-RT takes the latter to be problems to be ignored, as opposed to indications of important abiding tensions and contradictions in the practice of social science.

Perhaps most consequentially, under the terms being used by DA-RT, qualitative research in general and interpretive approaches in particular are defined a priori as unable to live up to the requirements of transparency, which has the effect of marginalizing such research in the profession. For example, the transparency norms established by JETS (the Journal Editors’ Transparency Statement inspired by DA-RT) impose new burdens on scholars pursuing scarce publication and funding opportunities, a startling reversal of how the concept is used in democratic theory, where transparency is understood as a constraint on the powerful, not a condition they impose. If there is a problem with transparency in the profession, given the rigors of the peer-review process, we conclude that it lies not in the unwillingness of authors to make their data known, but rather in the implicit standards that journal editors and anonymous peer reviewers use to define meritorious work.

Like all universalisms in practice, the aspiration to data transparency in the discipline speaks to a desire for homogeneity, or at least to the desirability of transcending difference. And like all universalisms, it is shot through with hierarchy, exclusions, and contradictions. For example, the demand for transparency perpetuates the false assumption that non-positivist approaches to knowledge production within the discipline lack the standards necessary to ensure research integrity. At the same time, DA-RT suggests that the problem with replicability will be solved by transparency, while there is little reason to believe that DA-RT will provide a panacea or that the discipline is in need of one. Rather than defending a set of one-size-fits-all transparency norms, the APSA should cultivate curiosity about the various philosophical and methodological traditions that underpin the study of politics. Doing so would necessitate the embrace of a plurality of standards for research excellence, ones carefully suited to the diverse methods employed within the profession.

Ethnography and Participant Observation (Working Group III.3)

—Jillian Schwedler, Erica S. Simmons, and Nicholas Rush Smith

Scope of the Report

Although ethnography involves multiple research techniques, ethnography’s core activity is participant-observation. This usually involves immersion in a research site—often understood with respect to a particular location, community, institution, or category of practice. Improvisation is a key technique, as interactions in the field create unanticipated ethical dilemmas, reshape hypotheses, and alter the questions that researchers pursue. The likelihood that research goals will change in the field presents complications for research openness, including how data sharing might upend ethical obligations to subjects.

Forms and Benefits of Research Openness

Although ethnographers do not share a single understanding of research openness, they often engage in multiple such practices. For instance, many describe how they gained access to field sites and what roadblocks prevented other potential research paths. Ethnographers generally discuss their positionality—how one’s gender, racial or ethnic background, class position, or nationality, and enmeshment in webs of power might shape the kinds of insights they produce. They often share their daily routines in the field, including how they interacted with interlocutors. They may describe how their subjectivity—including the emotional strains, challenges, or dangers they or their interlocutors experienced during fieldwork—impacted their research. They may also discuss elements of their analytic process, including their prior theoretical assumptions and how fieldwork changed them.

In addition to practices of openness aimed at the scholarly community, ethnographers have obligations to be open to interlocutors in field sites. Such openness toward research participants generally includes ethical imperatives to share the goals of the research, potential risks of participation, how researchers will handle data, and sources of funding. Ethnographers typically report on these interactions in their published writing, including choices for anonymization, data protection, data destruction, and why data may not be available for sharing.

Costs, Risks, and Limitations

Ethnographers face several challenges in making their research process visible to others. One challenge is that positivist concepts of transparency sit awkwardly with interpretive ethnography because they grate against the improvisation necessary to ethnographic practice. For data sharing, several limitations, costs, and risks arise. Epistemologically, field notes generally cannot be used as raw “data” by others because written notes are highly contextual and interpreted in light of an ethnographer’s “headnotes.” Ethically, sharing field notes can harm subjects because ethnographers may record potentially damaging information in notes that they decide to leave out of published work. Practically, the informed consent procedures that would be required to share field notes may lead subjects to decline to participate, because ethnographic research is based on trust, which is difficult to extend beyond the relationships developed during fieldwork. More generally, institutionalized data sharing may have the unintended consequence of pushing researchers to avoid topics that might put them or their subjects at risk.

Recommendations

We advance three sets of recommendations for editors, grounded in the view that procedures for research openness for ethnographers must be highly contextual, depending on the researcher, the question, the field site, and the methodological commitments involved. First, we suggest editors seek two categories of reviewers: (1) skilled ethnographers and (2) scholars who know the researcher’s field site(s). Second, we suggest that editors allow space for ethnographers to be open, to the extent ethically possible, about everything from research design, to interactions in the field, to analytical processes. Even then, scholars should not be forced to report all research activities in a single article or appendix (if that is even possible), as each project has widely varying ethical obligations. Third, published work might best be considered not as “100% self-contained”; instead, methodological issues or insights might be elaborated over a series of publications.

Set-Analytic Approaches, Especially Qualitative Comparative Analysis (QCA) (Working Group III.4)

—Kendra Koivu, Carsten Q. Schneider, and Barbara Vis

Scope of the Report

Our report focuses on set-analytic approaches that use algorithms and computer software for parts of their analysis, particularly Qualitative Comparative Analysis (QCA). We concentrate on transparency concerning the “analytic moment” that stretches from assigning membership scores of cases in the condition and outcome sets to the presentation and interpretation of the results obtained via the truth table’s logical minimization.Footnote 122 We do not address transparency issues related to the research processes prior to and after this analytic moment as they are not specific to set-analytic approaches and are covered by other QTD working groups, such as those on research ethics, text-based sources, and non-automated content analysis.

Forms and Benefits of Research Transparency

Given our focus on set-analytic approaches as data analysis techniques, we discuss transparency measures regarding the analytic process and data. These measures have five main benefits: they enhance the interpretability of a study’s findings, allow replication, improve the clarity of communication, improve understanding of QCA as a method in the discipline, and aid teaching of and innovation in QCA. We emphasize six steps in the analysis about which QCA researchers should be transparent.

First, researchers should be transparent about how they transform their “raw” data into the membership scores of each case in the condition and outcome sets. Raw data can be any form of information about cases—from interviews to archival data or standardized indices—and sets can be crisp, fuzzy, or multi-value. Set calibration is typically an iterative process, and researchers need to explicate and justify what information they used, which qualitative anchors they chose, which form of calibration they employed, and which software they relied on. Both the raw and the calibrated data should be made available.
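As one concrete illustration, the widely used “direct method” of fuzzy-set calibration maps raw values onto membership scores via a logistic transformation anchored at three researcher-chosen thresholds. The sketch below, with hypothetical anchor values, follows the general logic of Ragin’s direct method; actual software implementations differ in detail:

```python
import math

def calibrate(raw, full_non, crossover, full_mem):
    """Direct-method fuzzy-set calibration sketch: map a raw value onto
    [0, 1] membership using three qualitative anchors chosen (and
    justified) by the researcher."""
    if raw >= crossover:
        # raw == full_mem yields log-odds of ~3 (membership ~0.95)
        log_odds = 3.0 * (raw - crossover) / (full_mem - crossover)
    else:
        # raw == full_non yields log-odds of ~-3 (membership ~0.05)
        log_odds = -3.0 * (crossover - raw) / (crossover - full_non)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical example: calibrating GDP per capita (in $1,000s) into the
# set "wealthy country" with anchors 2.5 (out), 10 (crossover), 25 (in).
for gdp in (2.5, 10.0, 25.0, 40.0):
    print(gdp, round(calibrate(gdp, 2.5, 10.0, 25.0), 2))
```

Reporting the anchors (2.5, 10, and 25 in this toy example) and the rationale behind them is precisely the kind of calibration transparency the report calls for.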

Step 2 is to conduct tests of necessity. Researchers need to discuss why a specific set is postulated as necessary for the outcome, rather than the other minimal supersets of that outcome that also pass the researchers’ empirical criteria for consistency and relevance. The latter choices, the thresholds for consistency and empirical relevance, also need to be explicated.
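In fuzzy-set notation, one common formulation of these parameters is the consistency of necessity together with Schneider and Wagemann’s relevance of necessity (RoN), where \(x_i\) and \(y_i\) denote case \(i\)’s membership in the condition and outcome sets:

\[
\mathrm{Cons}_{\mathrm{nec}} = \frac{\sum_i \min(x_i, y_i)}{\sum_i y_i},
\qquad
\mathrm{RoN} = \frac{\sum_i (1 - x_i)}{\sum_i \bigl(1 - \min(x_i, y_i)\bigr)}.
\]

The thresholds applied to these parameters are among the analytic choices the report asks researchers to explicate.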

Step 3 involves representing the calibrated data in a truth table. For each truth table row, researchers should state what led them to designate it as sufficient for the outcome, not sufficient, or a logical remainder. This requires that researchers reveal the thresholds imposed regarding each row’s consistency and case frequency. If further criteria have been used, such as the PRI score (Proportional Reduction in Inconsistency), researchers should explicate them, too.
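A minimal sketch of the two most common row-level parameters, sufficiency consistency and the PRI score, using the formulas conventionally given in the fuzzy-set QCA literature (the toy memberships are hypothetical):

```python
def row_consistency(x, y):
    """Sufficiency consistency of a truth-table row: x holds each case's
    membership in the row (conjunction of conditions), y its outcome
    membership. Values near 1 indicate the row is consistently
    sufficient for the outcome."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

def pri(x, y):
    """Proportional Reduction in Inconsistency: penalizes rows that are
    simultaneously 'consistent' with the outcome and its negation."""
    joint = sum(min(xi, yi, 1 - yi) for xi, yi in zip(x, y))
    top = sum(min(xi, yi) for xi, yi in zip(x, y)) - joint
    return top / (sum(x) - joint)

# Hypothetical memberships for five cases in one truth-table row:
x = [0.9, 0.8, 0.7, 0.6, 0.2]
y = [0.8, 0.9, 0.6, 0.7, 0.3]
print(round(row_consistency(x, y), 2), round(pri(x, y), 2))  # 0.94 0.9
```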

Once the truth table is constructed, step 4 is that researchers summarize the information contained in the truth table via logical minimization. This requires that researchers reveal how they handle logical remainders, and if they allow only “easy” counterfactuals (intermediate solution), researchers should report their directional expectations. If model ambiguity arises—that is, if more than one solution formula can be obtained—it should be reported. Researchers should also state which software package they used for logical minimization.

Once all necessary and sufficient conditions have been identified, researchers need to clearly present and interpret their findings (step 5). This includes representation in the form of Boolean expressions, with all relevant parameters of fit. Case labels should be displayed, ideally in a separate table, for cases that are typical of each sufficient term, cases that deviate from the broader picture, and cases that remain unexplained. Sometimes the solution formula not only reveals conjunctions of sufficient conditions but also fulfills the empirical criteria for being interpreted as a disjunction that is necessary for the outcome; if so, this should be discussed as well. Relatedly, researchers should explicitly state whether their goal in using QCA is descriptive or causal inference.

Researchers should indicate in the text—and more extensively in an online appendix—which robustness tests have been performed (step 6) and what they reveal about the main findings and the analytic choices made during the analytic moment: calibration, thresholds for parameters of fit, and treatment of logical remainders. Additionally, robustness checks against equally plausible case- and condition-selection decisions should be performed.

Costs, Risks, and Limitations

The transparency measures discussed here are relatively low in cost and entail few real risks. The costs involve the time and space needed to report all transparency measures. One risk is over-reporting, that is, inundating the reader with so much information that clarity is hampered rather than fostered by transparency. Researchers therefore must guide the reader.

Recommendations

We recommend that QCA-researchers provide full information, as indicated earlier, about the (1) process of set-calibration, (2) tests of necessity, (3) construction of the truth table, (4) procedure of logical minimization, (5) details and interpretation of the findings, and (6) robustness tests performed. These are relatively low-cost measures that enhance the clarity of the analysis. In addition, in our report, we also recommend that QCA-researchers:

  1. Provide the script of the analysis if using command-line software such as R, or screenshots if using graphical user interface software.

  2. Report the software package used, including the version number.

  3. Account for “going back and forth between theory and evidence” by discussing, for example, initial theoretical hunches, how the data altered these, whether and why selected cases were changed, and whether and why initial conditions were dropped or otherwise altered.

We advise cautious and selective use of the following practices:

  4. When the data used are qualitative, we advise that researchers be aware of ethical, proprietary, and logistical concerns involved in sharing those data (see QTD working group reports on forms of evidence, sensitive contexts, and research ethics).

Finally, in our report we counsel against the following practices:

  5. Lengthy appendices without a summary in the main text. Appendices should be curated appropriately.

  6. Presenting research as hypothesis-testing when it followed the “dialogue between theory and evidence” approach.

  7. Providing a substantive interpretation of all QCA solution formulas, unless the explicit goal is comparing them.

Non-Automated Content Analysis (Working Group III.5)

—Zachary Elkins, Scott Spitzer, and Jonas Tallberg

Scope of the Report

Working Group III.5 considered issues of research transparency in the manual collection and content analysis of texts, audio, and visual materials. Our report is based on the authors’ research experience, comments of those participating online in the QTD discussion board on this topic, and direct communications with colleagues.

Forms and Benefits of Research Transparency

A principal contribution of the report is the conceptualization and evaluation of the various forms that research transparency might take in this methodological domain. By forms, we mean the various kinds of research materials or products that scholars might choose to disseminate. The report identifies nine types of such materials, which vary with respect to the stage of the analysis, burden on the researcher, benefits to the research community, and risks:

  1. Raw (primary) source material

  2. Bibliographic references to the source material

  3. Sampling plans

  4. Commentary and deliberative process notes regarding coding decisions

  5. “Chapter/verse” references for each coding decision

  6. Data codebooks

  7. Coded data

  8. Estimates of reliability (see the sketch following this list)

  9. Concept mapping (glossary/ontology).
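For the reliability estimates in item 8, a standard summary statistic for two coders is Cohen’s kappa, which corrects observed agreement for the agreement expected by chance. A minimal sketch, with hypothetical codings:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders over the same items:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement given each coder's marginal rates."""
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of ten text segments into "pos"/"neg" categories:
a = ["pos", "pos", "neg", "pos", "neg", "neg", "pos", "pos", "neg", "pos"]
b = ["pos", "neg", "neg", "pos", "neg", "pos", "pos", "pos", "neg", "pos"]
print(round(cohens_kappa(a, b), 2))  # -> 0.58
```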

Many of these forms of transparency are self-explanatory, and several may be easily achieved by sharing research products that most content analysts are already producing. Some of these materials, however, are not yet widely familiar. They are included to inspire further exploration by researchers and to anticipate evolving technologies. Concept mapping, for example, refers to an emerging set of information technologies for documenting the set of concepts, terms, definitions, and related terms that underlie empirical observations. Such formalizations of concepts are helpful for understanding theoretical motivations as well as for connecting and surfacing data in online search platforms.

The expected benefits of these forms of research transparency in the content-analytic domain are not fundamentally different from those in other domains. Generally, increased transparency can lead to increased research integrity and accessibility. The report describes these benefits in the particular case of content-analytic research, in which research transparency across the nine forms listed can facilitate the interpretability and evaluability of findings and the replicability of the analysis, and can encourage spillover contributions to the collective good. While many of these benefits accrue to the scholarly community as a whole, others redound directly to the individual researcher in terms of the increased credibility and impact of his or her research.

Costs, Risks, and Limitations

Our report also elaborates the potential risks, costs, and constraints for content-analytic researchers who choose to make their research products available. The report identifies and describes risks in five principal categories: (1) the protection of researchers’ intellectual property rights; (2) time and resources; (3) confidentiality; (4) copyright concerns; and (5) “chilling” effects on data collection.

Importantly, the report notes two related but distinct constituencies for these research products: (1) other researchers, and (2) journal editors, publishers, and funding agencies. Demands from these constituencies create different incentives and constraints for researchers.

Recommendations

Non-automated content-analytic research is at something of a crossroads. Dramatic improvements in information technology have enabled researchers to make their data and procedures widely and deeply available to the research community and the public. Commensurately, the demand for, and expectations of, access to such materials has grown. Scholars understandably face some challenges in meeting these demands and expectations, which they sometimes have to trade off against other priorities.

In the area of content analysis, these issues may be especially acute. Digital technology means that boxes of material, which were otherwise inaccessible, can now be shared more easily. Apart from these primary source materials, scholars also have the ability to share many of the secondary data resources and documentation in the long channel to their published research. Nevertheless, this domain of research has few established transparency standards thus far. As such, developing common reference points and expectations could be useful.

One clear reference point derives from the increasing demand for replication materials among most journals. The standard appears to be to require authors to provide access only to the coded data used to generate specific published results. We consider this a reasonable baseline and see no reason why journals should require access to additional material (e.g., source material, coding decisions, and codebooks), unless there are concerns about the coded data that require an in-depth look at how they were generated. These sorts of decisions can be most productively negotiated with authors on a case-by-case basis.

At the same time, vis-à-vis the scholarly community, there may be good reasons to go further, and here current practices are quite diverse, even within political science, and the minimum standard may be insufficient. The report outlines possible alternative transparency standards, discusses pros and cons of these alternatives, and recommends what we consider reasonable principles. In our assessment, there is much to gain for individual researchers and the academic community by expanding access to all main types of material used in non-automated content analysis. We therefore expect that a move toward such access would and should occur naturally, given the incentives for authors to share materials in order to increase the impact and credibility of their research. Accordingly, we stress that such distribution should be voluntary.

Research in Authoritarian and Repressive Contexts (Working Group IV.1)

—Eva Bellin, Sheena Chestnut Greitens, Yoshiko Herrera, and Diane Singerman

Scope of the Report

Authoritarian and repressive contexts pose distinctive challenges for scholars aiming to deliver on the goal of research transparency. In these settings, opinions are not freely exchanged, nor is information easily accessed. Locally based interlocutors often face considerable risks—from harassment or threats to their job or family members to imprisonment, torture, or worse—if they share information that is considered politically sensitive or compromising to the powers-that-be. To generate knowledge, achieve real understanding, and stay true to research ethics, scholars operating in these dangerous and data-poor environments must place a premium on measures that protect interlocutor confidentiality, build networks of trust, clarify contextual meaning, and acknowledge the researcher’s positionality.

Forms and Benefits of Research Transparency

Four kinds of transparency prove especially important for effective research in this context. Transparency, understood this way, makes research in these environments ethically permissible and can provide specific benefits to the discipline.

  • Transparency about risk to human participants/communities: Given that local interlocutors face significant danger when they share sensitive information with outsiders, commitment to strict confidentiality is an inviolable ethical obligation for researchers working in repressive and authoritarian contexts. Researchers must be transparent toward research participants and other interlocutors in authoritarian contexts, as well as resolute about the protocol they will observe to ensure the safety of their interlocutors, both when they are in the field and when they return “back home.” This is essential to sustain the trust necessary to conduct effective research, as well as to uphold the paramount research ethic: protection of human subjects.

  • Transparency about the process of generating evidence: The repressive character of this research environment introduces various biases and distortions into the research process. For example, a repressive environment can constrain the selection of interviewees, the choice of research locations, and even the questions pursued. Transparency about the research process involves being explicit about the procedures adopted and the choices made in collecting data. To enable the research consumer to adjudicate the validity and limitation of the results presented, it is essential to describe the inferential challenges faced, the ethical and security challenges encountered, and the impact these challenges have had on the selection of interviewees, location, choice of questions asked, and interpretation of the data collected.

  • Transparency about researcher positionality: In authoritarian and repressive contexts, where information is not freely shared, the development of networks of personal trust is often essential to accessing data. The researcher’s positionality plays an especially pivotal role in shaping the research process. Transparency about the researcher’s identity and placement in larger power structures is essential to enable the research consumer to evaluate the limitations and validity of the data collected in this environment.

  • Transparency as contextualization: In environments made opaque by repression, people are forced to disguise their true feelings and often resort to code language to convey sensitive views. A pause, hesitation, or even the absence of a response may convey as much as the words that are actually said. To access the true meaning of the data gathered, the seasoned interpreter must give it context. Clarity requires transparency about the research environment and interpretation enabled by the researcher’s deep understanding of that environment. Verbatim reproduction of interview transcripts and other field notes, while it may seem “transparent” to some research consumers, will not deliver true understanding. Only contextualization can achieve that.

Costs, Risks, and Limitations

Transparency is limited by the presence of four major risks when conducting research in authoritarian and repressive environments:

  • The risk of endangering interlocutors: To protect local contacts in this difficult context, researchers must strictly protect the confidentiality of their sources. This ethical obligation comes at the cost of data sharing and transparency in the “evidence-generating” process. That is, it forbids the naming of sources, the publication of interview transcripts, and the sharing of field notes.

  • The risk of generating distrust: To sustain the networks of trust necessary to generate data (as well as protect the safety of interlocutors), researchers must often pursue sensitive issues in an indirect and non-threatening way. This may come at the cost of full transparency of research goals and funding sources.

  • The risk of unpredictability over time: Regimes in repressive and authoritarian contexts often hover in “grey zones,” making it impossible to predict what will be deemed politically sensitive at a given time. In this context of uncertainty, there is no way for interlocutors to provide unconditional “informed consent”; researchers bear an obligation to understand and adjust their protocols to protect interlocutors as the research environment changes. Again, to meet the higher ethic of protecting human subjects, researchers must make compromises on transparency in the evidence-generating process.

  • The risk of making immersive qualitative work prohibitively expensive: Anonymization and redaction of field notes and interview materials have been suggested as methods by which researchers might reconcile the goals of interlocutor protection and analytic replication. But meticulous redaction of data would be extremely burdensome, would likely be insufficient to guarantee the safety of interlocutors, and would also be certain to render much of the data misleading due to its de-contextualization. Besides the concern that no amount of “scrubbing” identifiers would be sufficient to secure true anonymization of this data, this approach would make qualitative work prohibitively expensive, discouraging what is already very challenging research and compromising data generation.

Recommendations

We therefore offer the following five recommendations for the field in producing and evaluating qualitative research that has been carried out in repressive and authoritarian contexts:

  1. Make the research process transparent and subject to evaluation by having researchers attach a short methods appendix to their work that describes the procedures adopted, the choices made in collecting data, and the reasoning behind these choices.

  2. Avoid the publication/distribution of “raw data” (field notes, interview transcripts). The validity of immersive research in authoritarian and repressive contexts is best checked not through “replication” via publication of field notes and interview transcripts, but rather through the production of more research—independent accounts that confirm or contradict prior research. Placing prohibitive barriers on such research only disincentivizes inquiry into authoritarian and repressive contexts and shrinks, rather than enlarges, the possibility of checking the validity of the research.

  3. Reinforce the credibility of research findings derived from non-replicable interviews and field observations by complementing them with other sources of data, both public (speeches, official documents, social media) and non-human (historical data, financial data).

  4. Enhance transparency (in the sense of clarity and understanding) by embracing the value of immersive fieldwork, and affirming its role in the discipline.

  5. Bolster research ethics by providing explicit incentives to protect the confidentiality and safety of interlocutors in the protocols that govern gate-keeping junctures (such as publishing/reviewing and prize awards).

Research in Violent or Post-Conflict Political Settings (Working Group IV.2)

—Ana Arjona, Zachariah Mampilly, and Wendy Pearlman

Scope of the Report

“Political violence research” encompasses research on a range of phenomena and settings in which the use, threat, or legacy of physical coercion pervades struggles over power, resources, and meaning, including violence by organized criminal groups. Political violence research also encompasses research conducted in contexts characterized by conflicts of these sorts. Drawing on feedback from the broader research community, this report discusses the benefits, costs, and risks of adopting various forms of openness and transparency in political violence research and identifies the appropriateness and constraints of different approaches.

Forms and Benefits of Transparency

Transparency toward research participants: This form of transparency includes full disclosure of funding sources and sharing the outcome of our research with our research subjects as well as with more general audiences. It also can include informing subjects about how information about them will be shared, and protecting their identities. To qualitative scholars, this form of transparency is of paramount importance because it pertains to the ethical treatment of subjects. In addition, it could lead to greater trust between researchers and research subjects and hence improve the reliability of data for qualitative and quantitative researchers alike.

Processes of generating evidence: For evidence-based research to be evaluated by others and to contribute to knowledge accumulation, scholars must provide information about the process of data collection and the content of the resulting data. Describing in detail the methods used to gather evidence allows readers to take into account how the process of data collection may affect conclusions, contributing to others’ ability to evaluate the research and to the accumulation of knowledge.

Data sharing: Different epistemological and methodological positions among scholars of political violence lead to distinct views on what information should be shared, with whom, and how. Overall, however, researchers recognize that increasing data sharing within research communities enables replication and gives readers additional information with which to judge the validity of analyses.

Researcher positionality: To some (especially interpretivist scholars), evaluating data quality requires explicitly addressing a researcher’s positionality. Positionality refers to one’s subjectivity (including race, gender, class, sexuality, nationality, and institutional support, among other aspects) and how it shapes the research process. This form of transparency could also be important for positivist scholars insofar as positionality affects data collection and analysis.

Analytic process: Transparency about how researchers analyze their empirical evidence to yield findings is also important. This includes clearly stating research goals and assumptions, communicating how we have conducted our analyses, and showing how we have arrived at our conclusions. It also includes sharing failures and discussing the ways in which our own subjectivity may have influenced our analysis. This form of transparency would help others assess the validity of our claims and contributions.

Costs, Risks, and Limitations

Risks to human subjects: Data transparency might put research participants in danger, even unintentionally. Political violence researchers have suggested that the publication of full transcripts, even when anonymized, presents risks quite distinct from careful, select quoting of statements from interviews. Those risks apply not only to interviewees themselves, but also to other individuals whom they mention and even some never mentioned explicitly. Most violence researchers are thus wary of across-the-board transparency requirements, and instead advocate deference to the judgment of scholars (who presumably understand particular research contexts) and a norm of erring on the side of caution.

Risks to researchers: Blanket demands for disclosing raw qualitative data might also create unforeseen risks that jeopardize people’s safety in addition to making researchers susceptible to legal action or limiting future access for researchers. Scrubbing transcripts is also problematic as it might be an inordinately burdensome task that can delay publication, have a chilling effect on research, and impose a disproportionate burden on qualitative researchers or those with fewer resources. In addition, one of the appeals of making available “scrubbed” transcripts or interview excerpts is that it allows others to gauge the degree to which scholars “cherry pick” evidence consistent with their hypotheses. Yet sharing select interview transcripts in these ways does not completely resolve this problem.

Risks to research: Demands for data transparency might generate risks for scholarly research itself, undermining knowledge accumulation by discouraging investigation of particular topics or places. Having to release interview transcripts could prevent scholars from conducting certain types of research. In particular, scholars might be less prone to conduct research on vulnerable populations out of fear of putting them at further risk. Alternatively, they may choose to research only powerful actors due to the relative ease of meeting transparency norms.

Recommendations

Though there are no “one-size-fits-all” prescriptions, our assessment of the field yields several basic principles to guide research openness.

  1. Research subject transparency: Researchers can pursue this goal by disclosing how information about research subjects or their testimonies will be shared, discussing researchers’ own funding sources, and sharing the outcomes of research with the communities under study.

  2. Data transparency: Participants in our working group’s deliberations did not support requiring qualitative scholars to share all interview transcripts or all details about interviewees. One suggestion to improve transparency is making the whole response to a question available when quoting from that answer or providing longer excerpts from the interviews that are being quoted. Generally, conflict scholars seem to be in favor of continued creative thinking about ways to increase transparency and counter bias, while abiding by the obligation to do no harm.

  3. Thoughtfulness about implications: Both quantitative and qualitative scholars, whether positivist or interpretivist, should address potential implications of the ways in which sensitive data are gathered.

  4. Analytical transparency: To increase transparency about analytical processes, conflict researchers should more fully explain decisions taken in evaluating and interpreting evidence, articulate the beliefs and assumptions that underlie those decisions, and address how subjectivity and positionality might affect different aspects of the conduct of research. We can do more to disclose hypotheses, cases, or other aspects of a project that failed, discuss how ethnographic projects transformed in the field, or present information that otherwise does not support our theories, and might even contradict them.

Research on Vulnerable and Marginalized Populations (Working Group IV.3)

—Milli Lake, Samantha Majic, and Rahsaan Maxwell

Scope of the Report

This report discusses research transparency—its forms and benefits, costs and risks, and recommendations for practice—as it pertains to qualitative research projects involving vulnerable and marginalized populations.

In the first section of the report, we draw on the QTD deliberations to explain that “marginalization and vulnerability” are not fixed or given categories in political science research. Relying on university Institutional Review Board (IRB) definitions of these terms is insufficient: those definitions do not account for the full range of research participants that political scientists may encounter, and IRBs are often unfamiliar with the contexts in which political science research is conducted. Rather than positing a universal or all-encompassing definition of vulnerability and marginalization, we therefore suggest that these terms are constituted vis-à-vis specific research questions and contexts.

Forms and Benefits of Research Transparency

This section elaborates forms of transparency that require particularly careful consideration in the context of work with vulnerable and marginalized populations.

  1. Process of generating evidence and analytic process: First, we consider transparency about the process of generating evidence and about the analytic process. Researchers must offer their readers clear and extensive details about their project’s conceptualization, implementation, and data analysis. Specifically, this means explaining the original project design, providing details about how data were actually collected, discussing any ethical challenges, and indicating how data were assessed and analyzed. This form of transparency is essential for research involving marginalized and vulnerable populations, as sharing primary data collected from them (e.g., full interview transcripts) may place them at risk.

  2. Transparency toward research participants: Second, the report discusses transparency toward research participants. Scholars working with vulnerable and marginalized populations have an obligation to be open and honest with their research participants about their project’s risks and benefits, beyond what IRBs typically require. To this end, we recommend a dialogic engagement with participants, in which the researcher explains research risks and benefits and acknowledges her own privilege/position of power, while encouraging participants to question the research at any time.

  3. Data sharing: Finally, we address transparency in the form of data sharing. Our consultations revealed that scholars working with marginalized or vulnerable populations do not believe that transparency requires them to provide primary research materials (e.g., interview transcripts) to their readers, as this may expose identities and compromise safety, livelihoods, etc. Instead, sharing primary data is a context-specific matter that depends heavily on the researcher’s relationship with his or her participants, and whether sharing the data will compromise participants’ safety, livelihoods, or reputations (or those of their communities).

Costs, Risks, and Limitations

We suggest that two forms of transparency—sharing primary source data or sharing specific details of how primary source data were collected—can impose undue costs on vulnerable and marginalized populations with no clear gains. Requests to share sensitive data can also pose indirect risks to researchers and particular research topics.

Risks to vulnerable and marginalized populations: Specifically, we note that vulnerability and marginalization depend very much on the nature of the research. In highly politicized research settings, for example, participants who do not immediately appear vulnerable may be rendered vulnerable by the nature of the information they possess. In similar settings, subjects may face no particular form of vulnerability or political persecution when the research is undertaken, but should the political climate change, information disclosed may prove sensitive and harmful. We thus stress the importance of deferring to the researcher’s judgment, derived from her knowledge of the research site.

One approach to the ethical risks of data sharing centers on the role of informed consent. Yet, as we discuss, merely asking participants for consent to share raw data can have adverse consequences, and we warn researchers of the risks of introducing conversations about data and information sharing in sensitive research environments. In some contexts, even broaching the question of making data publicly available can cause participants anxiety, signal that the researcher does not fully understand the research context, and call into question her knowledge of the topic. By jeopardizing participants’ trust in the researcher, requests to share data may compromise the candidness of their responses and thus the quality of the data themselves.

Risks to researchers: Finally, in addition to the risks posed to vulnerable and marginalized research participants, we discuss the costs and risks that would increasingly be posed to researchers, and to the field more broadly, if journals made more explicit demands that scholars working with such populations share sensitive primary-source data. These costs include disincentivizing research with these populations by making it difficult to publish in top peer-reviewed journals.

Recommendations

The prevailing wisdom concerning the meaning of transparency rests on making visible “both the empirical foundation and the logic of inquiry of research.” Because scholarship in political science is so varied in scope, method, and substance, there is no one-size-fits-all approach to research transparency; any innovation or improvement is highly contingent on the type of research being undertaken and the nature of the question under investigation. Given this variation, we nonetheless identify a number of low-cost improvements that build on the broader objectives of fostering transparent and ethical research articulated in the previous sections. We recommend that journal editors, reviewers, and readers: 1) adopt a broader perspective on vulnerability that encompasses the many forms it may take, 2) refrain from imposing one universal standard for data disclosure, and 3) avoid placing the burden of securing exemptions for sensitive research on the scholars conducting such research.

Data Availability Statement

An archive of the full text of the online QTD deliberations is available in Harvard Dataverse at: https://doi.org/10.7910/DVN/SWVFV8.

Supplementary Materials

Appendix 1. Full Biographical Information for All Authors

Appendix 2. QTD Working Groups’ Full Reports

I. Fundamentals

I.1a Epistemological and Ontological Priors: Varieties of Explicitness and Research Integrity

Marcus Kreuzer and Craig Parsons

I.1b Epistemological and Ontological Priors: Explicating the Perils of Transparency

Timothy W. Luke, Antonio Y. Vázquez-Arroyo, Mary Hawkesworth

I.2 Research Ethics and Human Subjects: A Reflexive Openness Approach

Lauren M. MacLean, Elliot Posner, Susan Thomson, Elisabeth Jean Wood

I.3 Power and Institutionalization

Rachel Beatty Riedl, Ekrem Karakoç, Tim Büthe

II. Forms of Evidence

II.1 Text-based Sources

Nikhar Gaikwad, Veronica Herrera, Robert Mickey

II.2 Evidence from Research with Human Participants

Anastasia Shesterinina, Mark A. Pollack, Leonardo R. Arriola

III. Analytic Approaches

III.1 Process Tracing and Comparative Methods

Andrew Bennett, Tasha Fairfield, Hillel David Soifer

III.2 Interpretive Methods

Lisa Björkman, Lisa Wedeen, Juliet A. Williams, Mary Hawkesworth

III.3 Ethnography and Participant Observation

Jillian Schwedler, Erica S. Simmons, Nicholas Rush Smith

III.4 Set-Analytic Approaches, Especially Qualitative Comparative Analysis (QCA)

Kendra Koivu, Carsten Q. Schneider, Barbara Vis

III.5 Non-automated Content Analysis

Zachary Elkins, Scott J. Spitzer, Jonas Tallberg

IV. Research Contexts

IV.1 Research in Authoritarian and Repressive Contexts

Eva Bellin, Sheena Chestnut Greitens, Yoshiko M. Herrera, Diane Singerman

IV.2 Research in Violent or Post-Conflict Political Settings

Ana Arjona, Zachariah Mampilly, Wendy Pearlman

IV.3 Research on Vulnerable and Marginalized Populations

Milli Lake, Samantha Majic, Rahsaan Maxwell

To view supplementary material for this article, please visit http://dx.doi.org/10.1017/S1537592720001164.

Footnotes

A list of permanent links to the Supplementary Materials provided by the authors precedes the References section.

Ana Arjona, Associate Professor, Northwestern University; Leonardo R. Arriola, Associate Professor, University of California, Berkeley; Eva Bellin, Professor, Brandeis University; Andrew Bennett, Professor, Georgetown University; Lisa Björkman, Assistant Professor, University of Louisville; Erik Bleich, Professor, Middlebury College; Zachary Elkins, Associate Professor, University of Texas at Austin; Tasha Fairfield, Associate Professor, London School of Economics; Nikhar Gaikwad, Assistant Professor, Columbia University; Sheena Chestnut Greitens, Associate Professor, The University of Texas at Austin; Mary Hawkesworth, Professor Emerita, Rutgers University; Veronica Herrera, Assistant Professor, University of California, Los Angeles; Yoshiko M. Herrera, University of Wisconsin-Madison; Kimberley S. Johnson, Professor, New York University; Ekrem Karakoç, Associate Professor, Binghamton University (SUNY); Kendra Koivu, Associate Professor, University of New Mexico; Marcus Kreuzer, Professor, Villanova University; Milli Lake, Associate Professor, London School of Economics; Timothy W. Luke, Professor, Virginia Polytechnic Institute and State University; Lauren M. MacLean, Professor, Indiana University; Samantha Majic, Associate Professor, John Jay College/City University of New York; Zachariah Mampilly, Professor, City University of New York; Rahsaan Maxwell, Professor, University of North Carolina at Chapel Hill; Robert Mickey, Associate Professor, University of Michigan; Kimberly J. Morgan, Professor, George Washington University; Sarah E. Parkinson, Assistant Professor, Johns Hopkins University; Craig Parsons, Professor, University of Oregon; Wendy Pearlman, Professor, Northwestern University; Mark A. Pollack, Professor, Temple University; Elliot Posner, Professor, Case Western Reserve University; Rachel Beatty Riedl, Professor, Cornell University; Edward Schatz, Associate Professor, University of Toronto; Carsten Q. Schneider, Professor, Central European University; Jillian Schwedler, Professor, City University of New York; Anastasia Shesterinina, Lecturer (Assistant Professor), University of Sheffield; Erica S. Simmons, Associate Professor, University of Wisconsin–Madison; Diane Singerman, Associate Professor, American University; Nicholas Rush Smith, Assistant Professor, City University of New York – City College; Hillel David Soifer, Associate Professor, Temple University; Scott J. Spitzer, Associate Professor, California State University, Fullerton; Jonas Tallberg, Professor, Stockholm University; Susan Thomson, Associate Professor, Colgate University; Antonio Y. Vázquez-Arroyo, Associate Professor, Rutgers University-Newark; Barbara Vis, Professor, Utrecht University; Lisa Wedeen, Professor, University of Chicago; Juliet A. Williams, Professor, University of California, Los Angeles; Elisabeth Jean Wood, Professor, Yale University; Deborah J. Yashar, Professor, Princeton University. For full biographical information and contact information for all authors, see Appendix 1 in the Supplementary Materials.

For financial support, we are grateful to the U.S. National Science Foundation under Political Science Program Grant # 1644757 for the Qualitative Transparency Deliberations Interim and Working Group Meetings. Any opinions, findings, conclusions or recommendations expressed in this article are those of the authors or participants in the Qualitative Transparency Deliberations and do not necessarily reflect the views of the National Science Foundation. Elizabeth Good and Byron Haworth provided outstanding research assistance in the organization of QTD materials and the preparation of this essay; Kathryn Alexander and Elizabeth Good helped with preparing and conducting the APSA 2016 meeting of the QTD working groups; Rob Cooper, Courtney Orning, Stephen Sample, and Josh Smith at Duke University’s Social Science Research Institute and Florian Schmidt at the Hochschule für Politik/School of Governance at the Technical University of Munich (TUM) provided critical IT support for the QTD website and online fora; Alberto Alcaraz provided excellent editorial support for the entire project. The project was made possible by contributions to the Qualitative Transparency Deliberations by hundreds of scholars, ranging in rank from PhD students to emeritus professors.

1 Kendra Koivu, who was Associate Professor of Political Science at the University of New Mexico, was a generous and important contributor to the social science methods community, including as co-editor of Qualitative and Multi-Method Research, co-founder of the Southwest Workshop on Mixed Methods Research, a member of QTD Working Group III.4 (Set-Theoretic Approaches), and a co-author of the overview essay and her working group’s report, as well as a great colleague to many of us. Kendra passed away in September 2019.

2 While the JETS was posted at http://www.dartstatement.org/#/!blank/c22sl, the page is no longer available as of this article’s publication.

3 See also Bleich and Pekkanen 2015; Trachtenberg 2015; Cramer 2015; Shih 2015; Parkinson and Wood 2015; Pachirat 2015; Romney, Stewart, and Tingley 2015; Wagemann and Schneider 2015; Davison 2015; Fairfield 2015; Büthe and Jacobs 2015b. The full symposium can be found at https://ssrn.com/abstract=2652097.

4 See https://dialogueondart.org/petition/. Also, a number of political science journals announced and explained decisions not to sign on to the JETS. These included World Politics (Yashar 2016) and Perspectives on Politics (Isaac 2015).

6 See, in particular, the discussion of this issue in the report of working group I.3 on power and institutionalization.

7 While this article frequently differentiates between various research communities and types of researchers in the profession, we do not presume mutual exclusivity. To the contrary: many political scientists draw on multiple approaches, use various methods (even within a single project), and are members of multiple research communities. Further, virtually every quantitative study involves or builds on qualitative methods in the process of generating the data it uses; the issues discussed in the QTD reports are thus relevant to the qualitative elements of any political analysis.

8 Although the terms may mean different things to different readers, we use research transparency, openness, and explicitness interchangeably in this essay.

9 Note that the QTD’s output is not limited to the working group reports and summaries. As most of the underlying consultations took place online in written form, the text of these discussions, posted on the Harvard Dataverse at https://doi.org/10.7910/DVN/SWVFV8, should themselves be understood as part of the QTD’s contribution to disciplinary debates about qualitative research openness.

10 Full details of the QTD process can be found on the QTD website at https://www.qualtd.net/page/about and https://www.qualtd.net/page/qtd-process.

11 Participants could choose either to identify themselves or to post anonymously. A topic index of the Stage 1 posts is at https://www.qualtd.net/viewtopic.php?f=10&t=85.

12 The count is based on the July 2017 version of the discussion forums.

13 While the Steering Committee reserved the right to remove uncivil posts from the platform, the Committee did not view any post as warranting removal.

14 See also Yom 2015.

15 The definition is adapted from the authoritarianism report (IV.1).

16 See, for instance, the reports on text-based sources (II.1), comparative methods and process tracing (III.1), content analysis (III.5), QCA (III.4), and research with vulnerable/marginalized populations (IV.3).

17 See report IV.3.

18 See the reports on evidence from research with human participants (II.2), ethnography (III.3), and research on political violence (IV.2).

19 See the report on textual sources (II.1). See also Trachtenberg 2006: esp. 51ff.

20 On transparency about the production and selection of textual sources, see report II.1. On selection, see the report on content analysis (III.5).

21 See the ethnography report (III.3).

22 See, e.g., the report on evidence from research with human participants (II.2).

23 Ibid.

24 Ibid.

25 See the report on research in authoritarian contexts (IV.1) and research in violent contexts (IV.2).

26 Ibid.

27 See the report on research in violent contexts (IV.2).

28 See, e.g., report III.1.

29 See, e.g., the reports on text-based sources (II.1) and on research with vulnerable and marginalized populations (IV.3).

30 See report IV.2.

31 See report III.1.

32 See the report on evidence from research with human subjects (II.2) and, on making textual sources findable, report II.1.

33 See the reports on ethnography (III.3) and research in violent contexts (IV.2).

34 See reports III.4 and III.5. The working group on research in violent contexts (IV.2) also reports input from colleagues who see data sharing as important for replication-based evaluation.

35 See the report on text-based sources (II.1).

36 See the report on text-based sources (II.1).

37 See the reports on text-based sources (II.1), evidence from research with human participants (II.2), and content analysis (III.5).

38 See the reports on QCA (III.4), content analysis (III.5), and research in violent contexts (IV.2).

39 See, e.g., the reports on ethics (I.2), research in violent contexts (IV.2), and research with marginalized and vulnerable populations (IV.3).

40 See report IV.2.

41 See report IV.3.

42 See the reports on evidence from research with human subjects (II.2), ethnography (III.3), and research in authoritarian contexts (IV.1).

43 See, e.g., the report on ethnography (III.3).

44 See the reports on research ethics (I.2), evidence from research with human subjects (II.2), ethnography (III.3), research in authoritarian contexts (IV.1) and in settings of political violence (IV.2), and research with vulnerable/marginalized populations (IV.3).

45 See the ethnography report (III.3).

46 See the reports on evidence from research with human subjects (II.2), research in authoritarian contexts (IV.1), in violent settings (IV.2) and with marginalized/vulnerable populations (IV.3).

47 See the report on evidence from research with human subjects (II.2).

48 See the reports on research ethics (I.2), evidence from research with human participants (II.2), ethnography (III.3) and on research in authoritarian contexts (IV.1), in settings of political violence (IV.2), and with vulnerable/marginalized populations (IV.3). See also Knott 2019.

49 See the reports on ethics (I.2), ethnography (III.3), and research in authoritarian contexts (IV.1).

50 See the report on research in violent settings (IV.2).

51 See the report on ethics (I.2).

52 See the report on research with vulnerable/marginalized populations (IV.3).

53 See the report on research in authoritarian contexts (IV.1). More generally on the limitations of IRBs as adjudicators of ethical risk in political science research, see the reports on political violence research (IV.2) and on research with vulnerable and marginalized populations (IV.3), as well as on power and institutionalization (I.3).

54 See the report on research in violent settings (IV.2) and on evidence from research with human participants (II.2). On threats to researchers working in China, see Greitens and Truex 2020.

55 See the reports on research ethics (I.2), evidence from research with human participants (II.2), ethnography (III.3), and research in violent settings (IV.2).

56 See the reports on research ethics (I.2), evidence from research with human participants (II.2), research in authoritarian contexts (IV.1), and research with vulnerable/marginalized populations (IV.3).

57 See the report on research in violent settings (IV.2).

58 See the reports on research ethics (I.2), power and institutionalization (I.3), ethnography (III.3), research in violent settings (IV.2), and research with vulnerable/marginalized populations (IV.3).

59 See the report on research in violent settings (IV.2).

60 See the reports on textual sources (II.1), evidence from research with human participants (II.2), content analysis (III.5), and research in authoritarian contexts (IV.1). See also Hall 2016 for a discussion of these and related costs.

61 See the report on evidence from research with human participants (II.2).

62 See the reports on evidence from research with human participants (II.2), ethnography (III.3), and research in violent settings (IV.2).

63 See the report on research ethics (I.2).

64 See report II.1.

65 Noted in the reports on textual sources (II.1), evidence from research with human participants (II.2), content analysis (III.5), and research in violent settings (IV.2).

66 See the report on textual sources (II.1).

67 See the report on QCA (III.4).

68 See the report on comparative methods and process tracing (III.1). See also Hall 2016 and Trachtenberg 2015 on the readability costs of some forms of transparency.

69 Elman and Kapiszewski 2014, 44.

70 See, e.g., Lupia and Elman 2014.

71 See the report on research ethics (I.2), section III.

72 Report III.3, 6–7.

73 See the reports on “Epistemological and Ontological Priors: Varieties of Explicitness and Research Integrity” (I.1a) and “Epistemological and Ontological Priors: Explicating the Perils of Transparency” (I.1b).

74 See report I.1b.

75 See report I.1a, as well as a discussion in Parsons 2015.

76 See, in particular, the reports on epistemological and ontological priors (I.1a and I.1b), as well as the reports on interpretive and ethnographic methods (III.2 and III.3).

77 See reports I.1b and III.2.

78 Report on interpretive methods (III.2), 2.

79 Report on “Epistemological and Ontological Priors: Explicating the Perils of Transparency” (I.1b), 19. See also the report on interpretive methods (III.2).

80 Report on interpretive methods (III.2), 7.

81 Report I.2.

82 The reports on research in violent settings (IV.2) and on research with marginalized/vulnerable populations (IV.3) similarly emphasize the importance of transparency toward research participants. The report on research in authoritarian contexts (IV.1) signals disagreement among scholars working in repressive settings around the wisdom of full candor with participants about research purposes and funding sources.

83 The report on research in authoritarian contexts (IV.1) similarly argues for a process of case-by-case editorial decision-making informed by conversations between editors and authors about transparency choices that might affect human participants.

84 Report III.4.

85 While the report notes disagreement among set-theoretic scholars on certain analytic issues, the authors report that there is no substantial disagreement on transparency matters.

86 Reports II.1, III.1, and III.5.

87 Indeed, it is this deep divide that led the working group on epistemological and ontological priors (I.1) to the decision to produce two separate reports.

88 See report III.3.

89 See, for instance, the working groups on evidence from research with human participants (II.2) and on research in authoritarian contexts (IV.1), in settings of political violence (IV.2), and with vulnerable/marginalized populations (IV.3). Each of these groups was composed of scholars who do interpretive research and of scholars who work in a more positivist vein.

90 The groups that took the strongest position on data sharing are the QCA group (III.4) and the content analysis group (III.5), but they advocate a general data-sharing expectation only when the data take quantitative form.

91 See the report on research in violent settings (IV.2).

92 See the report on research in violent settings (IV.2).

93 See the report on text-based sources (II.1). The report on evidence from research with human participants (II.2) similarly points to hyperlinked citations accompanied by source documents as a potentially useful approach to data sharing when used with due attention to human subjects protection concerns.

94 See the report on text-based sources (II.1).

95 See, especially, the reports on “Epistemological and Ontological Priors: Explicating the Perils of Transparency” (I.1b) and on Interpretive Methods (III.2).

96 This includes explicit discussion of the value of this form of transparency in reports II.1, II.2, III.1, III.3, III.5, IV.1, IV.2, and IV.3. The issue was of little relevance to the QCA group’s deliberations (III.4) as these were focused strictly on a specific set of analytic algorithms.

97 See the report on text-based sources (II.1).

98 See the report on evidence from research with human participants (II.2).

99 See the report on evidence from research with human participants (II.2). See also Bleich and Pekkanen 2013.

100 See the reports on evidence from research with human participants (II.2) and research with vulnerable/marginalized populations (IV.3).

101 See the report on comparative methods and process tracing (III.1).

102 See the report on content analysis (III.5).

103 See the reports on text-based sources (II.1), evidence from research with human participants (II.2), comparative methods and process tracing (III.1), ethnography (III.3), and research in authoritarian contexts (IV.1). See also Kapiszewski, MacLean, and Read 2015, 392.

104 Report I.2.

105 Explicit discussion of the importance of this form of transparency features in reports II.1, II.2, III.1, III.3, III.4, IV.1, IV.2, and IV.3.

106 See the reports on comparative methods and process tracing (III.1) and authoritarian contexts (IV.1).

107 See the report on ethnography (III.3).

108 See the reports on QCA (III.4) and research in violent settings (IV.2). The report on comparative methods and process tracing (III.1) also points to debates about whether the sequence in which hypotheses were developed and evidence examined is of analytical relevance; see also Fairfield and Charman 2019. On challenges of integrating deduction and induction in process tracing, see Hall 2013, esp. pp. 26–28.

109 See the report on comparative methods and process tracing (III.1).

110 See the report on comparative methods and process tracing (III.1).

111 See reports I.2, II.2, III.3, IV.1, IV.2, and IV.3.

112 See reports I.2, IV.1, IV.2, and IV.3.

113 See reports I.2, II.2, III.3, IV.1, IV.2, and IV.3.

114 Report on research in violent settings (IV.2).

115 Report III.3. See also the reports on evidence from research with human participants (II.2) and research in violent settings (IV.2).

116 Report IV.2.

117 See report III.2.

118 See Saunders 2014; Snyder 2014.

119 This principle is most clearly articulated in the ethics group’s report (I.2).

120 The Guide does not explicitly lodge this responsibility with any actor, though in stating that “scholars may be exempted” from transparency requirements, it seems to imply that the decision to grant or not grant the exemption lies with an actor other than the researcher-author, such as the journal editor. See, specifically, clause 6.4, APSA Committee on Professional Ethics, Rights and Freedoms 2012.

121 For reasons spelled out in our full report, we use “research explicitness” even when discussing what on various QTD threads and in a large number of bilateral and small group offline exchanges was often discussed as “research transparency.”

122 Note that we do not provide definitions of all QCA-related concepts in this summary; we do this in our full report.

References

APSA Committee on Professional Ethics, Rights and Freedoms. 2012. A Guide to Professional Ethics in Political Science. Washington, DC: American Political Science Association.
Bleich, Erik, and Pekkanen, Robert. 2013. “How to Report Interview Data.” In Interview Research in Political Science, ed. Mosley, Layna, 84–105. Ithaca, NY: Cornell University Press.
Bleich, Erik, and Pekkanen, Robert. 2015. “Data Access, Research Transparency, and Interviews: The Interview Methods Appendix.” Qualitative & Multi-Method Research 13(1): 8–13.
Brady, Henry E., and Collier, David, eds. 2010. Rethinking Social Inquiry: Diverse Tools, Shared Standards. 2d ed. New York: Rowman & Littlefield Publishers.
Büthe, Tim, and Jacobs, Alan M. 2015a. “Transparency in Qualitative and Multi-Method Research: Introduction to the Symposium.” Qualitative & Multi-Method Research 13(1): 2–8.
Büthe, Tim, and Jacobs, Alan M. 2015b. “Research Transparency for a Diverse Discipline: Conclusion to the Symposium.” Qualitative & Multi-Method Research 13(1): 52–64.
Christian, Thu-Mai Lewis, Lafferty-Hess, Sophia, Jacoby, William G., and Carsey, Thomas M. 2018. “Operationalizing the Replication Standard: A Case Study of the Data Curation and Verification Workflow for Scholarly Journals.” (https://osf.io/j9yn7/download/).
Clemens, Michael A. 2017. “The Meaning of Failed Replications: A Review and Proposal.” Journal of Economic Surveys 31(1): 326–42.
Cramer, Katherine. 2015. “Transparent Explanations, Yes. Public Transcripts and Fieldnotes, No: Ethnographic Research on Public Opinion.” Qualitative & Multi-Method Research 13(1): 17–20.
Davison, Andrew. 2015. “Hermeneutics and the Question of Transparency.” Qualitative & Multi-Method Research 13(1): 43–47.
Elman, Colin, and Kapiszewski, Diana. 2014. “Data Access and Research Transparency in the Qualitative Tradition.” PS: Political Science & Politics 47(1): 43–47.
Elman, Colin, and Kapiszewski, Diana. 2018. “The Qualitative Data Repository’s Annotation for Transparent Inquiry (ATI) Initiative.” PS: Political Science & Politics 51(1): 3–6.
Fairfield, Tasha. 2015. “Reflections on Analytic Transparency in Process Tracing Research.” Qualitative & Multi-Method Research 13(1): 47–51.
Fairfield, Tasha, and Charman, Andrew. 2019. “A Dialogue with the Data: The Bayesian Foundations of Iterative Research in Qualitative Social Science.” Perspectives on Politics 17(1): 154–67.
Fujii, Lee Ann. 2012. “Research Ethics 101: Dilemmas and Responsibilities.” PS: Political Science & Politics 45(4): 717–23.
George, Alexander L., and Bennett, Andrew. 2005. Case Studies and Theory Development in the Social Sciences. Cambridge: MIT Press.
Goffman, Alice. 2015. On the Run: Fugitive Life in an American City. Reprint ed. New York: Picador.
Greitens, Sheena Chestnut, and Truex, Rory. 2020. “Repressive Experiences among China Scholars: New Evidence from Survey Data.” The China Quarterly 242: 349–375.
Hall, Peter A. 2013. “Tracing the Progress of Process Tracing.” European Political Science 12(1): 20–30.
Hall, Peter A. 2016. “Transparency, Research Integrity, and Multiple Methods.” Comparative Politics Newsletter 26(1): 28–31.
Isaac, Jeffrey C. 2015. “For a More Public Political Science.” Perspectives on Politics 13(2): 269–83.
Jacobs, Alan M., and Büthe, Tim. 2020. “Deliberative Archive for: The Qualitative Transparency Deliberations: Insights and Implications.” https://doi.org/10.7910/DVN/SWVFV8, Harvard Dataverse, V1.
Kapiszewski, Diana, MacLean, Lauren M., and Read, Benjamin L. 2015. Field Research in Political Science: Practices and Principles. Cambridge: Cambridge University Press.
King, Gary. 1995. “Replication, Replication.” PS: Political Science & Politics 28(3): 444–52.
Knott, Eleanor. 2019. “Beyond the Field: Ethics after Fieldwork in Politically Dynamic Contexts.” Perspectives on Politics 17(1): 140–53.
Lupia, Arthur, and Elman, Colin. 2014. “Openness in Political Science: Data Access and Research Transparency: Introduction.” PS: Political Science & Politics 47(1): 19–42.
Monroe, Kristen Renwick. 2018. “The Rush to Transparency: DA-RT and the Potential Dangers for Qualitative Research.” Perspectives on Politics 16(1): 141–48.
Mosley, Layna, ed. 2013. Interview Research in Political Science. Ithaca, NY: Cornell University Press.
Pachirat, Timothy. 2015. “The Tyranny of Light.” Qualitative & Multi-Method Research 13(1): 27–31.
Parkinson, Sarah Elizabeth, and Wood, Elisabeth Jean. 2015. “Transparency in Intensive Research on Violence: Ethical Dilemmas and Unforeseen Consequences.” Qualitative & Multi-Method Research 13(1): 22–27.
Parsons, Craig. 2015. “Before Eclecticism: Competing Alternatives in Constructivist Research.” International Theory 7(3): 501–38.
Romney, David, Stewart, Brandon M., and Tingley, Dustin. 2015. “Plain Text? Transparency in Computer-Assisted Text Analysis.” Qualitative & Multi-Method Research 13(1): 32–38.
Saunders, Elizabeth N. 2014. “Transparency without Tears: A Pragmatic Approach to Transparent Security Studies Research.” Security Studies 23(4): 689–98.
Schatz, Edward, ed. 2013. Political Ethnography: What Immersion Contributes to the Study of Power. Chicago: University of Chicago Press.
Schwartz-Shea, Peregrine, and Yanow, Dvora. 2016. “Legitimizing Political Science or Splitting the Discipline? Reflections on DA-RT and the Policy-making Role of a Professional Association.” Politics & Gender 12(3): E11, 1–19.
Shih, Victor. 2015. “Research in Authoritarian Regimes: Transparency Tradeoffs and Solutions.” Qualitative & Multi-Method Research 13(1): 20–22.
Snyder, Jack. 2014. “Active Citation: In Search of Smoking Guns or Meaningful Context?” Security Studies 23(4): 708–14.
Trachtenberg, Marc. 2006. The Craft of International History: A Guide to Method. Princeton, NJ: Princeton University Press.
Trachtenberg, Marc. 2015. “Transparency in Practice: Using Written Sources.” Qualitative & Multi-Method Research 13(1): 13–17.
Tripp, Aili Mari. 2018. “Transparency and Integrity in Conducting Field Research on Politics in Challenging Contexts.” Perspectives on Politics 16(3): 728–38.
Van Evera, Stephen. 1997. Guide to Methods for Students of Political Science. Ithaca, NY: Cornell University Press.
Wagemann, Claudius, and Schneider, Carsten Q. 2015. “Transparency Standards in Qualitative Comparative Analysis.” Qualitative & Multi-Method Research 13(1): 38–42.
Wedeen, Lisa. 2010. “Reflections on Ethnographic Work in Political Science.” Annual Review of Political Science 13: 255–72.
Yashar, Deborah. 2016. “Editorial Trust, Gatekeeping, and Unintended Consequences.” Comparative Politics Newsletter 26(1): 57–64.
Yom, Sean. 2015. “From Methodology to Practice: Inductive Iteration in Comparative Research.” Comparative Political Studies 48(5): 616–44.