
Elite Interviewing in Political Science: A Meta-Analysis of Reporting Practices

Published online by Cambridge University Press:  02 January 2026


Abstract

Elite interviewing is a valuable tool that helps political scientists to understand decision making, trace political processes, and access insider knowledge. Yet despite its prevalence, we know surprisingly little about how elite interviews are conducted and reported in the discipline. This study addresses this gap by examining elite interviewing practices and transparency using an original dataset of articles published in 13 leading political science journals between 2000 and 2023. Drawing on article content and supplementary materials, I analyze trends in the use and quality of elite interviews, highlighting an increasing reliance on this method, particularly in comparative politics. Findings show promising improvements in reporting practices over time. Systematic reporting and the inclusion of online appendices significantly enhance transparency, offering detailed insights into ethical considerations, confidentiality, and data-sharing practices. This study underscores the evolving rigor in reporting elite interviewing, reflecting its enduring relevance and growing methodological sophistication in political science research.

Information

Type
Reflection
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of American Political Science Association

Elite interviewing has long been a valuable tool in political science, offering unique insights into political dynamics and providing rich, reliable data for understanding how politics operates. Since Fenno’s (1978) landmark study, demonstrating how members of Congress engage with constituents and showing that elites are both accessible and insightful sources of knowledge, elite interviewing has become a cornerstone method in the field. This paper examines how elite interviewing is currently practiced and evaluates the extent to which researchers adhere to established reporting standards. Although several studies have explored elite interviewing (see Ellinas 2023; Markiewicz 2024; Ntienjom Mbohou and Tomkinson 2022), the transparency of reporting—particularly how methodological decisions are documented—has received comparatively less attention. While theoretical frameworks and best-practice guidelines exist (Bleich and Pekkanen 2013; Kapiszewski and Karcher 2021b), we lack systematic and empirical assessments of how these standards are applied in published research. To address this gap, I draw on an original dataset of all journal articles published between 2000 and 2023 that employed elite interviewing in major political science journals. This study analyzes how elite interviewing is used in practice and how reporting standards have evolved over time.

Defining who qualifies as an “elite” remains a complex challenge in political science, as the term itself is elusive and context dependent. For the purposes of this study, I adopt a minimal definition of elites, characterizing them as “a group of individuals who hold or have held a privileged position in society and, as such, are likely to have had more influence on political outcomes than general members of the public” (Richards 1996, 199). Despite elite interviewing’s central role in theory development and empirical research (Berry 2002; Leech 2002), we still know surprisingly little about who uses this method, in what regional contexts, and for what substantive topics. Are elite interviews predominantly used by comparativists, international relations scholars, or Americanists? What are the prevailing practices around transparency, sampling, and data sharing? And what formats are used to structure and report these methods?

My analysis finds that reporting practices in elite interviewing are often inconsistent. Many articles omit essential information about the identities of interviewees, recruitment strategies, and sampling decisions. However, when articles include supplementary materials, especially appendices, the quality and transparency of reporting improves significantly. These materials often detail ethical procedures, recruitment strategies, researcher reflexivity, and issues related to anonymity and data sharing. This article argues that enhancing reporting practices not only strengthens the credibility and rigor of elite interviewing but also offers clearer guidance for future scholars engaging with this method.

Elite interviews can strengthen political science research by diversifying methodological approaches, improving data quality, and enabling triangulation for more robust descriptive and causal analysis. However, to fully realize these benefits, researchers must provide clear information about their interview methodology, sampling strategies, and overall research design. This transparency allows readers to evaluate the credibility of the findings and the strength of the inferences. At the same time, elite interviewing presents unique challenges, such as defining who qualifies as an elite, selecting appropriate participants, and managing potential biases tied to institutional roles or personal agendas. Transparent reporting of how interviews are conducted and how such challenges are addressed is essential to mitigate these issues.

While promoting best practices in reporting—such as comprehensive documentation and transparency—is essential, we must also avoid overburdening researchers with excessively detailed requirements, particularly those with limited resources or under the pressure of rapid publication timelines (Closa 2021). Thus, it is important to differentiate between conducting research in accordance with ethical standards and reporting the details of that research. Scholars routinely meet the ethical guidelines set by institutional review boards (IRBs) and publish their findings in reputable journals after thorough peer review. The issue lies not in the research itself, but in the varying levels of detail provided in the reporting of that research. Striking the right balance in reporting is crucial for advancing the field while respecting researchers’ time and resources.

This article is structured as follows. In the next section, I discuss reporting practices in elite interviewing, providing insights into areas for improvement in transparency and ethical rigor. Then I offer a detailed description of the dataset. Next, I examine trends in elite interviews within the discipline, showing that elite interviewing is a common tool across various substantive areas of political science research but is especially prevalent in comparative politics (hereafter CP). I then demonstrate that while there is significant room for improvement in current reporting practices, a promising trend has emerged in recent years. This trend reflects greater attention to ethical issues and increased transparency in the conduct of interviews, particularly through supplementary information. In the final section, I conclude by offering some practical solutions, recognizing the limitations of this analysis, and identifying ways forward. Overall, this article aims to contribute to ongoing qualitative and mixed-methods research efforts to improve reporting practices (for instance, Bleich and Pekkanen 2013; Kapiszewski and Karcher 2021b).

A Case for Reporting Practices in Elite Interviewing

Political scientists study a wide range of elites—tribal leaders, guerrilla commanders, lobbyists, representatives of nonstate organizations, rank-and-file parliamentary representatives, court judges, and heads of government—who all play pivotal roles in the political arena and provide key insights into the dynamics of politics. To understand these actors and their influence, we often turn to elite interviewing, which allows us to gather information directly from the individuals driving political action. Interviewing elites offers unique access to the inner workings of political institutions, such as the US Supreme Court (Clark 2010), interest groups (Nownes 2006), the motivations and ambitions of political actors (Beckmann 2010), the knowledge-building process of journalists and public opinion researchers (Cramer and Toff 2017), and internal deliberations shaping policy outcomes that are not available in public records (Carnegie and Carson 2020).

Yet the value of insights gained from elite interviews, or any interview-based research, is only as strong as the transparency with which the research is reported. These practices ensure that readers can critically evaluate the inferences drawn from the interview data. Following Bleich and Pekkanen (2013), I argue that transparency in how we identify, select, and interview respondents is essential for assessing the reliability and validity of our findings. Ongoing discussions in the qualitative and mixed-methods community note that the iterative process between data collection and analysis, particularly in qualitative interviewing, creates fragile conditions for assessing the data produced (Bleich and Pekkanen 2013; Small and Calarco 2022). Because the researcher generates the data and can make “in-the-moment decisions,” it is important to pay close attention to the data generation process and how we report the findings in our writing (Small and Calarco 2022).

Challenges such as securing interview access, establishing rapport, conducting analysis, and maintaining ethics are essential to understanding interview-based research, and therefore require thorough documentation and reflection (Ellinas 2023). Such documentation not only helps researchers to convey the strength of their work but also enables their audience to assess the quality of the evidence presented. Moreover, better reporting practices benefit the broader field of social science research. Qualitative and mixed-methods research training remains underdeveloped in contemporary graduate programs (particularly in the US), and without proper use and discussion of these practices, we lack the necessary know-how. One unwritten rule of scientific research is that we learn best from reading and mimicking others. If our reporting practices lack certain quality checks, we do an injustice to graduate students and junior faculty who rely on these examples to develop their skills and knowledge.

Ongoing debates about methodological rigor and transparency push qualitative, quantitative, and mixed-methods scholars alike to be exacting in how they conduct and disseminate research. Adhering to rigorous methodological standards for interviewing—or any qualitative method—is essential to contribute robustly to political science research (Aberbach and Rockman 2002). These standards include selecting interview subjects carefully, determining appropriate sample sizes, transparently reporting sampling strategies, and clearly documenting the interview process. Researchers are also expected to report how they recruited participants, especially elites; describe the interview format; and address ethical considerations such as anonymity and data sharing, which are often documented in supplementary materials. Given these expectations, I emphasize such practices throughout this article. While these standards are crucial across various research traditions, I recognize that the importance of certain practices may vary depending on epistemological perspectives.

It should also be noted that the practices we adopt as political science researchers are not independent of those adopted and revered by journals and reviewers. Despite recent efforts by major political science journals to diversify their methodological outlook, a common misconception persists that interview research lacks the rigor and scientific value of quantitative research (Moravcsik 2010).Footnote 1 In part, this is because we lack standards against which we can assess quality in elite interviewing. Sometimes it is possible, ethical, and safe to mimic quantitative research practices like pre-analysis plans, replication files, and data-sharing practices (Bentancur, Rodríguez, and Rosenblatt 2021; Kapiszewski 2024).Footnote 2

In other cases, however, mimicking quantitative research is not meaningful or can lead to unethical and dangerous practices for the researcher and human subjects involved. Therefore, treating qualitative research as inferior to quantitative research in reporting practices is misguided. We do have predecessors like Bleich and Pekkanen (2013) to help us achieve solid transparency in reporting, but there needs to be an established understanding of reporting practices that ensures quality control. Responsibility lies largely with researchers themselves. However, the shortcomings of graduate training, particularly in qualitative methods, and the lack of enforcement by journals and reviewers also contribute to inconsistent reporting practices.Footnote 3

One proposed solution to improve reporting is the inclusion of appendices that provide further details on the design and conduct of elite interviews (Bleich and Pekkanen 2013; Jacobs et al. 2021; Kapiszewski and Karcher 2021b; Small and Calarco 2022). For example, Bleich and Pekkanen (2013) introduced the concept of an “Interview Methods Appendix” as a concrete solution by making the data and methodologies accessible for scrutiny. Appendices enable alternative explanations by allowing others to analyze the data and draw their own conclusions. This openness mitigates the risks of cherry picking data to support a particular hypothesis. Additionally, online appendices establish standards that promote the reproducibility of findings by providing detailed documentation of the research process, which others can follow and replicate.Footnote 4

Another approach to improving transparency is archiving interview data in repositories such as the Qualitative Data Repository. When ethically and legally feasible, making interview transcripts, summaries, or coding frameworks available enhances the credibility of research and allows other scholars to engage more deeply with the data. However, archiving qualitative data presents challenges, particularly regarding confidentiality, consent, and the sensitivity of interviews. Researchers must carefully navigate these ethical and logistical concerns while balancing transparency with the protection of their interviewees. Encouraging researchers to adopt these practices—whether through detailed appendices or secure data repositories—can strengthen the field’s methodological rigor while fostering a culture of accountability and knowledge sharing.

While robust reporting practices benefit researchers, readers, and participants by ensuring that data are accessible, verifiable, and ethically handled, they also require careful planning, time, and resources. This is not to diminish their value but to acknowledge that transparency involves a careful balancing act. Therefore, advancing transparency in qualitative research should be seen as an ongoing process—one that thrives on collective support from journals, researchers, and the broader academic community.

The Dataset of Elite Interviewing Practices

Procedures and Coding Criteria

To better understand the use and reporting of elite interviewing, I constructed an original dataset of published peer-reviewed journal articles that use elite interviewing in political science. I used the following inclusion and exclusion criteria. First, I included only articles published in major political science journals, covering general journals and substantive area journals on international relations and CP. Including these substantive area journals is important because they are more likely to publish qualitative research: scholars in these subfields tend to rely more on interviewing and other qualitative methods than do the general journals, which showcase novel and advanced quantitative methods. I excluded books, edited volumes, and research notes for practical reasons. While books and edited volumes certainly feature important qualitative fieldwork, including them would require considerable time and research effort, making it impractical for this analysis. Within these journals, I focused on articles that used elite interviews as a primary form of interviewing.Footnote 5 I used different metrics to select these political science journals.Footnote 6 This procedure resulted in a sample of 13 journals: American Journal of Political Science, American Political Science Review, British Journal of Political Science, Comparative Political Studies, Comparative Politics, Democratization, International Organization, International Security, International Studies Quarterly, Journal of Peace Research, the Journal of Politics, Political Research Quarterly, and World Politics.Footnote 7

Second, my coverage spans from 2000 to 2023, a period in which the use of qualitative and mixed-methods research enjoyed renewed growth within our discipline (Bennett and Elman 2007).Footnote 8 I hand-coded each article, recording the journal name, author information, the year of publication, and the subfield of the research topic. In terms of interviewing practices, I coded relevant details on the sampling and recruitment procedures, descriptions of the interview subjects, whether the interviews were anonymized, approval information from an IRB or ethics committee, whether the interviews were conducted in person or online, and geographic coverage. For articles relying on multiple methods, I distinguished whether elite interviews were the primary evidence or complementary to another qualitative or quantitative approach. Meta-analyses often rely on article abstracts for data coding. I refrained from relying solely on abstracts to determine the methodological approach and content of articles, as abstracts do not always indicate the method employed. To code the information listed above, I carefully examined each article in full and, when available, its supplementary information or online appendices.
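To make this coding schema concrete, the sketch below expresses the per-article fields described above as a simple data structure. It is illustrative only: the field names are my shorthand here, not the variable names used in the replication files.

```python
# A minimal sketch of the per-article coding schema described above.
# Field names are illustrative shorthand, not the replication-file variables.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CodedArticle:
    journal: str                             # one of the 13 sampled journals
    year: int                                # 2000-2023
    subfield: str                            # e.g., "CP", "IR", "American"
    authors: List[str] = field(default_factory=list)
    interviews_primary: bool = True          # primary evidence vs. complementary
    sample_size: Optional[int] = None        # None when not disclosed
    recruitment_reported: bool = False       # sampling/recruitment discussed?
    interview_mode: Optional[str] = None     # "in person", "phone", "online", ...
    interview_format: Optional[str] = None   # "structured", "semistructured", ...
    anonymized: Optional[bool] = None        # pseudonyms used?
    irb_reported: bool = False               # IRB/ethics committee details given?
    has_qual_appendix: bool = False          # interview-specific appendix present?
    regions: List[str] = field(default_factory=list)  # geographic coverage
```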

Elite Interviewing Coverage in Political Science Journals

The resulting dataset covers 23 years, 13 journals, and 145 unique articles, of which 91 (around 63%) are single authored and 54 (around 37%) are coauthored. Of the 14,750 journal articles published in these journals between 2000 and 2023, around 1% use elite interviews. While this might seem like a small proportion, it is significant given that only 12% of all articles in these journals use purely qualitative methods, and just 3% incorporate mixed methods. Although relatively uncommon, elite interviewing plays a critical role in generating context-rich, process-oriented insights that are often inaccessible through other methods. However, its prevalence varies across journals. As shown in table 1, no journal exceeds 3% in its use of elite interviews. World Politics has the highest percentage (2.59%) of articles using elite interviews, whereas the Journal of Politics and Political Research Quarterly have the lowest (0.28% and 0.32%, respectively).
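These shares follow directly from the counts just given; the short calculation below, using only figures stated in the text, reproduces them.

```python
# Back-of-the-envelope check of the shares reported above,
# using only figures stated in the text.
total_articles = 14_750   # all articles in the 13 journals, 2000-2023
elite_articles = 145      # articles using elite interviews
single_authored = 91
coauthored = 54

print(f"elite-interview share: {elite_articles / total_articles:.2%}")   # ~0.98%
print(f"single-authored:       {single_authored / elite_articles:.1%}")  # ~62.8%
print(f"coauthored:            {coauthored / elite_articles:.1%}")       # ~37.2%
```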

Table 1 Number of Interviews in Journals (2000–23)

The presence of articles that use elite interviews varies substantially across journals, as seen in table 1. Since 2000, CP journals have featured articles using elite interviews substantially more often than journals covering other substantive areas. Almost 48% of the articles with elite interviewing have been published by three CP journals: Comparative Political Studies, Comparative Politics, and Democratization. Regarding the three main substantive areas identified by my coding, 71% of the articles were from CP, 22% were from international relations, and 7% were from American politics. In cases where elite interviewing was the sole evidence, all articles came from CP journals.Footnote 9

Descriptive findings from this dataset highlight several interesting insights. As shown in figure 1, the number of articles using elite interviews published in mainstream political science journals has increased substantially in the past 20 years. Despite this increase, research relying on elite interviews accounts for only a small portion of the total articles published in these journals. This disparity suggests that elite interviewing is a specialized yet essential tool for researchers focused on political elites and their influence on key issues.

Figure 1 Distribution of Articles Using Elite Interviews over the Years

Approximately 57% of the articles in my sample employed a qualitative approach, often in combination with other qualitative research designs. The remaining articles adopted a mixed-methods strategy, combining elite interviewing with quantitative methods. Among those using only qualitative methods, 10% relied exclusively on elite interviews. In the mixed-methods group, elite interviewing was the primary qualitative method in another 10% of cases. In 6% of the articles, elite interviews were included but not used as the main form of evidence. In most cases, elite interviews were accompanied by other kinds of qualitative evidence (e.g., case studies, ethnographic work, media analysis, content analysis) and quantitative evidence (e.g., experiments, quasi-experimental settings, panel data analysis, text analysis).Footnote 10

I also explored the main topics covered in scholarship using elite interviews. In identifying these topics, I adapted the categories developed by Cammett and Kendall (2021)Footnote 11 and coded 13 different topics covering different substantive areas of political science research: political regimes, which include regime transitions, autocracies, and democratization; inter- and intrastate conflict and political violence; political institutions; global governance and state behaviors; elections and voting behavior; party politics; political economy and development, which includes research on clientelism and the international economy; gender; human rights and security; social mobilization; migration; identity, culture, and norms; and religion and politics. Table 2 shows the distribution of topics covered in these articles. Research on political regimes and conflict accounts for the largest share; in these articles, political elites, government officials, policy makers, and bureaucrats are interviewed. Research on institutions, global governance, electoral politics, and party politics accounts for 39% of the articles. The remaining topics appear infrequently, with religion and identity receiving the least attention.

Table 2 Topics Covered in Elite Interviews

Geographically, most elite interviews in this dataset focus on single-country studies (66%). This is not surprising, as conducting interviews requires linguistic skills, deep knowledge of the political context, and significant time and monetary resources, making single-country research more feasible. These interviews also span diverse regions (as shown in figure 2), with nearly half conducted in African and European countries. This concentration reflects regional research priorities and distinctive political contexts. African countries often serve as key sites for studying democratization, governance, and conflict resolution, while European countries attract attention due to their diverse political systems and long-standing traditions of elite engagement.

Figure 2 Regional Distribution of Elite Interviewing

Notes: “Global” refers to cases where the research topic involved several countries in different continents.

Assessment of Reporting Practices in Elite Interview Research

Apart from journal-level trends, I also examined reporting practices in elite interviewing. I evaluated reporting in these articles based on nine key dimensions: sample size, recruitment strategies, modes of conducting interviews, interview structure, sample description, anonymity, ethical considerations, data sharing, and the inclusion of supplementary appendices. These categories were determined using an inductive-deductive approach, ensuring a balance between analytical expectations and the insights that emerged from the data.

Certain categories, such as sample size, interview structure, and interview modes, represent fundamental aspects of interview research that have garnered significant attention in the literature. These elements are vital in evaluating the rigor and reproducibility of qualitative research involving elites since they represent an “overlap between interpretivist and positivist interview research” (Mosley 2013, 11). However, beyond these foundational categories, my analysis also incorporated additional variables, like data sharing, the provision of supplementary information, and the use of appendices based on trends observed within the articles themselves. By documenting how frequently authors disclose critical information such as interview protocols, anonymization practices, and IRB information, this analysis provides a deeper understanding of the current state of transparency in elite interviewing.

Sample Size

Determining an appropriate sample size in interview research is challenging due to ongoing debates about the best conceptual approaches and the lack of clear, practical guidelines. While some researchers implicitly suggest that larger sample sizes can improve research quality (King, Keohane, and Verba 1994), the relationship between sample size and research quality remains contested. Recently, Gonzalez-Ocantos and Masullo (2024) offered a crucial critique, arguing that the relevance of the sample, rather than its size, is what truly matters. I do not argue that a specific number of interviews is inherently necessary, minimally acceptable, or better; instead, I emphasize the importance of reporting and justifying the sample size in light of the research question and objectives, regardless of methodological and epistemological traditions. The nature of the topic, the theoretical model, and the research question being explored should guide the determination of the population and, by extension, the sample size.

In my analysis, I examined whether and how scholars report their sample size. In the articles analyzed, the sample size varies widely, ranging from two to 513 interviewees (see table 3). Yet surprisingly, 37 articles (around 26%) did not disclose their sample size, either in the main text or in the supplementary materials. This omission is particularly concerning as it undermines transparency and hinders readers’ ability to evaluate the rigor and validity of the research. It also reflects broader challenges within the field, where editorial and peer review standards for qualitative methodological reporting may be inconsistently applied. These omissions, spanning from 2000 to 2023, suggest that reporting sample size has not been a consistently adopted practice. This is neither a recent development nor a relic of the past but rather a persistent issue in scholarly reporting.
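As a rough illustration, tallies of this kind can be reproduced from the dataset along the following lines; the file and column names below are hypothetical stand-ins, not the actual replication-file identifiers.

```python
# A minimal sketch of tallying disclosed vs. missing sample sizes,
# assuming a hypothetical CSV with a numeric "sample_size" column
# that is empty when an article does not disclose the figure.
import pandas as pd

df = pd.read_csv("elite_interviews.csv")  # hypothetical file name

disclosed = df["sample_size"].dropna()
print(f"range: {int(disclosed.min())}-{int(disclosed.max())}")  # 2-513 in this dataset
n_missing = df["sample_size"].isna().sum()
print(f"not disclosed: {n_missing} of {len(df)} ({n_missing / len(df):.0%})")  # 37 (~26%)
```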

Table 3 Number of Interviews Conducted

Note: n = 145.

Researchers may omit the sample size for various reasons, including a belief in its irrelevance or a lack of enforcement by journal editors and reviewers. While scholars from a positivist tradition often view sample size as a key indicator of methodological rigor, emphasizing diversity and representativeness, interpretivist researchers tend to prioritize the depth and contextual richness of interviews over numerical thresholds. In some cases, when a study yields rich and meaningful data, disclosing sample size or saturation details may be considered unnecessary (Collett 2024). However, as the field increasingly embraces mixed-methods approaches, greater transparency in methodological choices remains essential. Clearly articulating research decisions enables readers to evaluate the credibility of findings and draw their own informed conclusions.

Sampling Strategies

Explicit reporting of sampling strategies and the sample frame enhances research transparency and strengthens readers’ confidence in the findings (Bleich and Pekkanen 2013). In my sample, approximately 71% of the articles did not discuss recruitment procedures in the main text or supplementary materials. This lack of emphasis on recruitment is a significant concern, particularly given that these interviewees were elites, an often hard-to-reach population (Goldstein 2002). Among the studies that did report this information, half used purposive (i.e., nonrandom) sampling, most often snowball sampling; others employed different forms of purposive sampling or alternative strategies. In the remaining cases, researchers adapted their sampling approaches to fit their specific needs or the political context. For example, researchers often relied on lists of possible participants, cold calls, and other recruitment strategies to generate a pool of potential interviewees.

In elite interviewing, the inability to speak with certain elites is often as important as the conversations that do occur. Those we cannot interview offer important data points that could reveal more about the research topic and the position of those elites on the topic. Therefore, reporting these limitations is particularly relevant. While most articles offered only brief descriptions of their sampling process, a few provided detailed accounts in supplementary appendices, including how interviewees were contacted, response rates, and reasons for nonresponses. Notably, some studies offered exemplary transparency. For instance, Bush, Donno, and Zetterberg (2024) and Mir (2018) explicitly referenced Bleich and Pekkanen’s (2013) guidelines and thoroughly discussed their sampling frames, nonresponse rates, and the challenges involved in accessing certain elites.

Sample Description

Given the elusiveness of the concept of “elite,” reporting who researchers are trying to interview is crucial, providing clarity and context for the study’s findings. While some scholars argue that sample representativeness is an important aspect of interview research (Leech 2002), a detailed description of the interviewees can enhance transparency and improve the accumulation of knowledge for the field (Kertzer and Renshon 2022). From the researcher’s perspective, it aids in the recruitment process and the study’s design, as the type of elite one has access to influences the methodological decisions, such as recruitment strategies.

To assess reporting practices, I focused on two aspects: (1) whether researchers define who they are trying to interview and (2) whether they provide a list of interviewees. The definition of elites remained ambiguous in most of the studies in my sample. In approximately 82% of the articles, no list of interviewees (including details such as occupation or role) was provided. While 7% of the articles failed altogether to describe the elites they interviewed, there was notable diversity in how elites were characterized. In many cases, elites were vaguely referred to as “political elites,” “local elites,” or “politicians.” When identified more specifically, interviewees were typically described by their occupation, including government officials, members of political parties, leaders of nongovernmental organizations, academics, or actors in state and nonstate organizations. This variation underscores the importance of clarifying the intended subjects of elite interviews rather than relying solely on ambiguous labels. Online appendix C reports a word cloud and frequency table showing the diverse ways elites are described in this sample. All data and code used to generate these materials are accessible in the replication files (Tuncel 2025).
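A frequency table of this kind can be produced with a few lines of code. The sketch below is not the replication code; it assumes a hypothetical semicolon-delimited free-text column of elite descriptors (e.g., “government officials; academics”).

```python
# A hedged sketch of the descriptor frequency table behind online appendix C,
# assuming a hypothetical "elite_description" column with ";"-separated labels.
from collections import Counter

import pandas as pd

df = pd.read_csv("elite_interviews.csv")  # hypothetical file name

counts = Counter()
for desc in df["elite_description"].dropna():
    # split multi-label descriptions and normalize case and whitespace
    counts.update(label.strip().lower() for label in desc.split(";"))

freq = pd.Series(counts).sort_values(ascending=False)
print(freq.head(10))  # most frequent labels, e.g., "government officials"
```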

Modes of Conducting Interviews

In-person interviews are no longer considered the gold standard in the field. Advances in videoconferencing, logistical and budgetary considerations, and the potential of digital fieldwork to capture insights that traditional methods may overlook have expanded the range of interview practices (Kapiszewski, MacLean, and Smith 2024). Despite these developments, in-person interviews remain dominant in the articles analyzed, accounting for 80% of cases. Digital fieldwork methods—though increasingly recognized—were rarely used. Only 7% of studies employed alternative formats, including phone interviews (3%), phone and email (1%), online interviews (2%), and a combination of online and phone interviews (1%). Traditional fieldwork thus continues to be the prevailing mode in this dataset.

Although I expected digital fieldwork practices to dominate in recent years, my analysis did not identify a clear trend solely attributable to technological developments or the COVID-19 pandemic. Over the past few years, there has been a noticeable shift in fieldwork and publishing habits due to the pandemic.Footnote 12 However, the overall impact of these changes was not fully observable in this sample, as it only covers three postpandemic years.Footnote 13 Notably, all articles that solely relied on digital fieldwork practices were from 2023.Footnote 14

Interview Format

Interviews are typically categorized into three main formats: structured, semistructured, and unstructured, each with distinct advantages and limitations (Brinkmann 2014). Reporting the interview format matters because the format affects the depth, comparability, and nature of the information collected and, ultimately, how findings are interpreted. About 40% of the articles explicitly specified whether their interviews were structured, semistructured, unstructured, or followed another format.

Among those that did report this information, semistructured interviews were the most common, appearing in 32% of cases. This format, which balances structure with flexibility, allows researchers to use predefined questions while also probing deeper based on interviewee responses, making it particularly useful across different subfields. In contrast, only 9% of articles employed open-ended, unstructured, or in-depth interviews, which prioritize exploratory insights over standardization. Beyond interview structure, transparency regarding interview questions was even less common. Only 10% of the articles explicitly reported the questions asked during interviews, and in each of these cases the questions were disclosed in the appendix. Notably, nearly all of these articles were published in 2018 or later, indicating a relatively recent trend toward greater methodological transparency. The widespread use of semistructured interviews reflects their adaptability in political research, where balancing consistency with the need for rich, contextual understanding is essential. However, the lack of reporting on interview format in the majority of articles raises concerns about transparency, as this methodological choice significantly impacts how data are gathered and interpreted.

Anonymization of Interviewees

An essential consideration in any research with human subjects is safeguarding respondent confidentiality. In elite interviewing, researchers often face a tension between protecting the anonymity of respondents and providing rich, detailed accounts of elites’ roles (Saunders, Kitzinger, and Kitzinger 2015). In 70% of the cases, authors anonymized interviewees by using pseudonyms. In 23% of the cases, elites’ names were revealed; in these cases, the authors rarely explain whether they obtained permission to do so. In one instance, Brooks (2013) justified using elites’ names on the grounds that the interviewees were public figures and thus did not require confidentiality. As shown in figure 3, explaining anonymity decisions is not a common practice. Yet having an online appendix specific to elite interviewing correlates positively with explaining the anonymity decision (p < 0.01). In 7% of the cases, authors adopted both full anonymity and the use of interviewees’ names, depending on their ability to obtain permission from each interviewee. In around 88% of the papers, the authors did not openly discuss why certain anonymity decisions were made. In the rest of the cases, protecting interviewees from undue harm, interviewees’ requests to remain anonymous, maximizing the participation rate, or IRB protocol decisions were the primary reasons for anonymizing interviews.
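The correlation reported here is, in essence, an association between two binary indicators. A minimal sketch of such a test appears below; the column names are assumptions for illustration, and an exact test is used because some cells are likely small.

```python
# A minimal sketch of testing the association noted above: an interview-specific
# appendix vs. an explained anonymity decision. Column names are assumptions.
import pandas as pd
from scipy.stats import fisher_exact

df = pd.read_csv("elite_interviews.csv")  # hypothetical file name

table = pd.crosstab(df["has_qual_appendix"], df["explains_anonymity"])
odds_ratio, p_value = fisher_exact(table)  # exact test suits small cell counts
print(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")  # text reports p < 0.01
```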

Figure 3 Appendix Use, Anonymity Decisions, and Reporting IRB Information over the Years

Ethical Considerations

Protecting human research participants is crucial in elite interviewing (Brooks 2013; Jacobs et al. 2021). To assess ethical reporting, I looked mainly for information about informed consent (either written or expressed orally) and ethics committees or IRBs. Informed consent was reported in 17% of the cases, and this information appears mainly in appendices (around 90% of such mentions). My analysis reveals that scholarly work in this sample rarely mentioned IRB or equivalent ethics committee information. Only 17% of the studies included IRB details. Moreover, the type of IRB information provided varied greatly; some researchers shared details about the granting institution, while others provided information about IRB approval or exemptions. Interestingly, half of the articles containing IRB information were published in 2023, and the earliest was from 2014, indicating that sharing ethical considerations is a relatively recent trend. As shown in figure 3, online appendices and the reporting of IRB information have become more common in recent years.

Among studies using qualitative methods exclusively, only two articles—one from 2021 and another from 2023—reported IRB or ethics committee approval. In these cases, approval was obtained both from a US-based institution and from institutions in the country where research was conducted. A few articles also noted that IRB details were shared with interviewees. While elite interviews are often exempt from formal IRB review, this does not diminish the importance of ethical transparency. Additionally, my analysis reveals a growing emphasis on ethical and methodological transparency in recent years (see figure 3). Notably, some mixed-methods articles reported IRB details for their quantitative components while omitting ethics information related to their qualitative interviews. Encouragingly, the inclusion of online appendices is positively associated with improved ethics reporting: articles with appendices are significantly more likely to include details on informed consent, IRB approval, or other ethical review processes (p < 0.01).

Data-Sharing Practices

Over the past decade, qualitative and mixed-methods scholars have gained unprecedented opportunities to make their data accessible through online repositories and data-archiving tools (Elman, Kapiszewski, and Vinuela 2010; Jacobs, Kapiszewski, and Karcher 2022; Kapiszewski and Wood 2022). The “replication crisis,” which has deeply affected disciplines in the social sciences as well as psychology and medicine (Ioannidis 2005), has reshaped expectations around replication and reproducibility (King 1995). While data sharing is well established in quantitative political science, its role in qualitative research is more complex due to ethical and practical considerations. Nevertheless, practices such as active citation, in which we can link specific claims in a text to underlying evidence via annotations (Moravcsik 2010); the Annotation for Transparent Inquiry (ATI) approach, which embeds these annotations into digital publications to provide context and source material (Kapiszewski and Karcher 2021a); and repository-based data sharing have emerged as key strategies for increasing transparency in qualitative research.

Despite these advancements, data sharing remains rare in this sample. Notable exceptions include Mangonnet and Murillo (2020), who employed active citation by linking to resources on the primary author’s website, and Mayka (2021), who used the Qualitative Data Repository to provide annotated materials for readers. While such efforts enhance transparency, qualitative researchers must carefully navigate ethical challenges, particularly in fieldwork settings (Fujii 2012). When possible, sharing qualitative data offers multiple benefits: it not only clarifies the research process from design to publication but also aids novice scholars in understanding qualitative methodologies. This is particularly relevant in the US context, where qualitative research education and ethics training often fail to meet current demands (Emmons and Moravcsik 2020; Knott 2019). By making data collection processes more transparent, scholars can contribute to the development of methodological skills among graduate students and early-career researchers.

Data sharing in qualitative research presents significant challenges. Preparing interview transcripts or audio recordings for archiving is time consuming and involves multiple steps, including securing consent, anonymizing sensitive content, and formatting materials for accessibility. These logistical and ethical demands can be especially taxing for solo researchers or those without strong institutional support. Despite these barriers, discussion of data sharing remains rare in the articles I analyzed, and making interview data available continues to be the exception rather than the norm. Only two articles, by Buckley (2016) and Weeks (2018), stated in their appendices that transcripts or recordings were available upon request. Three others, by Jones (2019), Mir (2018), and Shesterinina (2016), explicitly noted that sharing was not possible due to IRB restrictions or topic sensitivity. Notably, all five articles were published in recent years, suggesting a gradually emerging attention to the ethics and logistics of qualitative data sharing.

The Use of Online Appendices

The use of methodological appendices in qualitative research is essential for enhancing rigor and transparency (Bleich and Pekkanen 2013). Some journals, including those analyzed in this study, have explicitly encouraged authors to provide supplementary materials detailing their qualitative methods. For instance, the submission guidelines for Comparative Political Studies recommend that researchers include additional documentation, “particularly if they employ qualitative analysis” (Comparative Political Studies 2025). Similarly, in 2022, the editors of American Political Science Review (APSR) acknowledged the constraints imposed by word limits, which had become increasingly problematic with the rise in qualitative submissions. To address this, they proposed using Harvard’s Dataverse for online appendices (APSR Editors 2022). Additionally, APSR editors announced the removal of word-count limits, a change aimed at facilitating the publication of qualitative and mixed-methods research articles, which tend to be longer (APSR Editors 2024).Footnote 15

Given these constraints, qualitative researchers have turned to methodological appendices to provide additional details about the content and conduct of elite interviews. As shown in figure 3, supplementary materials are becoming increasingly common. However, while half of the articles included a quantitative appendix, only 30% had an appendix specifically addressing qualitative elite interviewing. The content of these qualitative appendices varied widely. Some provided only minimal details, while others offered extensive documentation on the elite interviewing approach. One exemplary case is Bush, Donno, and Zetterberg’s (2024) appendix, which outlines their compliance with the APSA Council’s Principles and Guidance for Human Subjects Research. Their supplementary materials address key ethical considerations, including power dynamics, consent, deception, harm, trauma, confidentiality, impact, IRB protocols, compensation, and shared responsibility. Additionally, they include details about the interview guide, sample frame, response rate, saturation, interview format, recording method, and a list of interviewees, demonstrating a best-practice model for transparency in qualitative research.

More importantly, the presence of an online appendix was positively and significantly correlated with the disclosure of interviewee lists, informed consent procedures (written or oral), anonymity decisions, and IRB information (p < 0.01). The inclusion of detailed supplementary materials enhances the transparency and reproducibility of qualitative research by providing comprehensive insights into the research process and ethical considerations. By fostering greater trust and credibility, methodological appendices play a crucial role in upholding rigorous academic standards in qualitative political science.
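The same logic extends to the set of indicators named in this paragraph. The sketch below loops a chi-square test over several hypothetical binary columns; with 145 articles and small cells, an exact test (as above) may be preferable, and the column names are again illustrative rather than the actual replication variables.

```python
# A sketch generalizing the association tests to several transparency
# indicators at once; column names are illustrative stand-ins.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("elite_interviews.csv")  # hypothetical file name

indicators = ["lists_interviewees", "reports_consent",
              "explains_anonymity", "reports_irb"]
for col in indicators:
    table = pd.crosstab(df["has_online_appendix"], df[col])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"{col:>20}: chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```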

Data Transparency Expectations of Journals

The variation in reporting practices raises an important question: to what extent do journal policies shape transparency in qualitative research? To explore this, I examined the submission guidelines of the journals included in this meta-analysis to determine whether they provide specific instructions on qualitative data reporting and sharing. As of November 2024, nearly all the journals (nine out of 13) had explicit expectations regarding the use of appendices. Additionally, these journals outlined general policies for research involving human subjects, covering key aspects such as informed consent, confidentiality, data access, and adherence to ethical review board requirements.

However, only three of the 13 journals provided specific guidance on handling qualitative data. For instance, the Journal of Peace Research asks authors to include a detailed account of “data collection, ethics, and analysis” within their articles. Furthermore, when legally and ethically feasible, the journal encourages authors to create an online archive of interview transcripts, oral histories, or other hard-to-access materials to enhance research transparency (Journal of Peace Research 2025). The Journal of Politics similarly emphasizes reproducibility and replicability, requiring that both qualitative and quantitative analyses be replicable for a paper to be accepted (Journal of Politics 2025). Comparative Political Studies explicitly instructs authors to provide online-only appendices detailing their qualitative data-gathering process (Comparative Political Studies 2025).Footnote 16 Notably, the few articles in my dataset that explicitly mentioned data availability (Buckley 2016; Weeks 2018) or used active citation (Mangonnet and Murillo 2020) were published in Comparative Political Studies. This suggests that journal policies may play a role in encouraging transparency, though broader adoption of such guidelines across political science journals remains an ongoing challenge.

Concluding Remarks

This article provides a comprehensive analysis of elite interviewing within the field of political science, shedding light on its prevalence, methodologies, and reporting practices. Elite interviews have long been a vital tool for uncovering insights into political processes, yet their use in academic research has not been systematically explored. Through an original dataset that includes studies published in major political science journals, this article assesses current trends in elite interviewing and examines how well these studies adhere to established reporting standards.

I found that elite interviewing research is commonly featured in subfield journals, often as part of a mixed-methods approach.Footnote 17 While the method is less prominent in international relations and American politics, CP scholars rely heavily on elite interviewing. Researchers using elite interviews employ various sampling strategies and interview methods, reflecting the diverse definitions of elites within the field. Although only a small subset of articles rely exclusively on elite interviews, which is unlikely to change given broader methodological trends, there are encouraging signs of progress. However, despite its significance, elite interviewing remains an area where methodological reporting practices vary considerably. Expectations across journals for how interview-based research should be presented are not always consistent, and authors may devote limited space to explaining decisions around interview design and implementation. This may reflect, in part, enduring perceptions that place greater value on quantitative rigor, which can shape how qualitative work is evaluated and reported. Although this study does not compare elite interview reporting with other qualitative methods, the lack of consistency across articles raises concerns about the future of transparency in political science (Elman, Kapiszewski, and Lupia 2018). Key details, such as sampling decisions, ethical protocols, and interview procedures, are often underreported, leaving readers unable to fully assess the credibility or replicability of the research.

These issues undermine the value of elite interviewing as a rigorous methodological approach. However, in recent years, a growing number of studies have included detailed online appendices that disclose interview protocols, sampling strategies, and ethical considerations. Articles that include online appendices are significantly more likely to report on anonymity decisions, ethics, and other transparency practices. Some journals have also begun to encourage or require qualitative data supplements, signaling a broader cultural shift toward greater transparency. That said, the field still struggles with consistently reporting core elements of the interview process. The improvements observed are uneven and often depend on individual authors, subfield norms, and journal expectations. Thus, while the overall trend points toward more openness, much work remains to be done.

These findings suggest that better documentation, particularly through supplementary materials, can address concerns over methodological rigor (Bleich and Pekkanen 2013; Kapiszewski and Karcher 2021b). More broadly, the increased use of transparency tools and greater attention to ethics reflect a slow but meaningful shift toward higher standards in qualitative research. Still, persistent gaps, such as omitted sample sizes or vague methodological descriptions, highlight the uneven adoption of these evolving norms. Elite interviewing thus offers valuable insight into how the discipline is navigating growing demands for transparency, accountability, and best practices in qualitative inquiry.

One significant finding from this meta-analysis, which extends beyond elite interviewing research, is that using appendices significantly enhances the depth of information researchers provide. Journal submission guidelines and recent developments, such as ATI, have heightened our awareness of best practices for conveying research findings. Given this evolving landscape, it is clear that the responsibility for robust reporting practices is shared by researchers, journals, and the broader academic community. Drawing from the insights in this meta-analysis, I offer two interlinked practical recommendations aimed at helping researchers, reviewers, and journal editors enhance reporting practices for both elite interviewing and qualitative research more broadly. The first recommendation is not new but builds on Bleich and Pekkanen’s (2013) work by advocating the use of appendices to provide detailed information about the interview process. These can improve transparency, enabling readers to assess the quality and depth of the evidence. To support greater consistency across the discipline, I also propose a set of minimum reporting standards that should be included in the appendices of all studies relying on elite interviews: (1) justification for the sample size and the sampling and recruitment strategies in relation to research goals, (2) the structure and mode of interviews, (3) informed consent procedures and ethical review (e.g., via IRB), and (4) any measures for confidentiality and anonymization. These baseline practices do not require a particular epistemological orientation, yet their inclusion greatly enhances the clarity, rigor, and credibility of qualitative work. While I do not advocate for mandatory detailed appendices, I emphasize that such documentation—particularly when addressing the above elements—can be especially valuable for graduate students, journal reviewers, and scholars new to elite interviewing.

My second suggestion is to adopt ATI and active citation when feasible (Kapiszewski and Karcher 2021a; Moravcsik 2010). These practices go beyond creating appendices by embedding transparency directly into the research process. ATI makes the research process more transparent by allowing researchers to annotate their qualitative data, providing context and explanations.Footnote 18 The active citation system enables readers to verify and understand research findings by linking citations to the underlying data and analysis. These methods can significantly enhance reporting practices by making the research process more accessible and verifiable. The contributions of these practices to elite interview research include enhanced transparency, improved verifiability, and the facilitation of replication. However, implementing ATI and active citation can be resource intensive and require significant effort from researchers. Additionally, there may be ethical and practical constraints on sharing certain types of data, particularly in elite interview research where confidentiality and anonymity are paramount. While these methods significantly enhance transparency and verifiability, balancing these advancements with practical and ethical considerations is crucial.

I also encourage qualitative scholars to engage more actively with ethics committees or IRBs, as doing so can support better planning for transparency and facilitate the inclusion of methodological details in online appendices (Tuncel 2024). However, IRB approval should be considered a baseline, not a comprehensive ethical safeguard. As Fujii (2012) emphasizes, ethical responsibilities in qualitative research extend beyond formal review and must be negotiated throughout the research process. This includes treating informed consent as ongoing, being attentive to power dynamics, and responding adaptively to emerging dilemmas in the field. Engaging with ethics committees early in the research process is particularly beneficial, as it encourages thoughtful planning for consent, data storage, and anonymization. These committees can provide a structured approach to anticipating ethical challenges and ensuring responsible research practices. Nevertheless, their effectiveness hinges on a deep understanding of the unique needs of qualitative fieldwork. Researchers should be transparent about the methodological nuances of their projects, such as the iterative nature of semistructured interviews or the challenges of anonymizing elite data, to receive the most relevant and tailored guidance. At the same time, not all scholars have access to robust ethics infrastructure, and committees may lack expertise in qualitative fieldwork. Addressing these institutional and disciplinary gaps is crucial for fostering ethical and transparent research practices.

While this study provides valuable insights into the practice of elite interviewing in political science, it has several limitations. First, the analysis does not compare elite interviewing with other forms of qualitative data collection, such as non-elite interviews, focus groups, or participant observation, which may yield different results and offer additional context for understanding the role of elite interviewing in the field. Many of the recommendations offered here apply to qualitative interviewing generally, yet elite interviewing poses distinctive challenges, such as restricted access to participants and heightened confidentiality concerns, that warrant special attention in both reporting and research ethics. Unlike non-elite interviews, where rapport and disclosure may be more easily established, elite interviews often require more strategic preparation. These dynamics shape not only the interview process itself but also how findings can be reported.

Second, my dataset excludes books and regional journals, so it misses part of the scholarly output that incorporates elite interviewing. Claiming comprehensive coverage of elite interviewing practices would therefore be premature, and these exclusions may limit the generalizability of the findings. Even so, the study is broad in scope and, in many ways, the most comprehensive to date, offering valuable insights despite the challenges of covering the entire field within the constraints of time and resources.

Third, the impact of COVID-19 on publishing trends remains inconclusive. While the pandemic likely influenced research practices and publication patterns, the current dataset does not offer enough evidence to draw definitive conclusions regarding its impact on elite interviewing specifically. It is worth noting, however, that the data presented here extend through 2023, a period during which research practices and publication trends appear to have stabilized in a positive direction, as evidenced by the growing adoption of data sharing and appendices. Future research should continue to explore these areas to offer a more comprehensive understanding of the broader context in which elite interviewing operates. Further studies on qualitative interviewing trends should examine how data decisions, topics, and approaches are prioritized in this field. Understanding whether these research programs focus on questions and issues distinct from those commonly featured in mainstream, US-based political science journals will help to determine if unique perspectives and contributions are being overlooked in the broader academic discourse.

Supplementary material

To view supplementary material for this article, please visit http://doi.org/10.1017/S1537592725104040.

Data replication

Data replication sets are available in Harvard Dataverse at: https://doi.org/10.7910/DVN/CDXO5O

Acknowledgments

For valuable feedback on previous versions of this article, I express my sincere gratitude to the Emerging Methodologists Workshop and my workshop mentor, Juan Masullo. I am deeply grateful to Diana Kapiszewski and Hillel David Soifer for organizing this workshop and to Erik Bleich for providing extensive feedback. I also thank the participants at the APSA 2024 preconference for their thoughtful contributions, particularly the other presenters Eun-A Park, Irene Morse, Shagun Gupta, Salih Noor, and Ulaş Erdoğdu; mentors Agustina Giraudy, Danielle Lupton, Erik Bleich, Lauren M. MacLean, and Michelle Jurkovich; and participant Marcus Walton. This work benefited from insightful discussions and support provided during these events. Finally, I would like to thank the three anonymous reviewers for their exceptionally detailed engagement with the article.

Footnotes

1 It is also pertinent to note that diversifying the representation and methodological outlook of major political science journals has become a concern for editorial teams. For instance, in 2022, American Political Science Review editors strove to create a more inclusive space for qualitative research in the field (APSR Editors 2022).

2 Institutions such as the Inter-university Consortium for Political and Social Research (https://www.icpsr.umich.edu) at the University of Michigan, the Qualitative Data Repository (https://qdr.syr.edu) at Syracuse University, and the Odum Institute for Research in Social Science (https://odum.unc.edu) at the University of North Carolina at Chapel Hill allow researchers to share their qualitative research data. In response to efforts laid out by Haven, Errington et al. (Reference Haven, Errington, Gleditsch, van Grootel, Jacobs, Kern, Piñeiro, Rosenblatt and Mokkink2020), the Open Science Framework has encouraged qualitative researchers to preregister their studies and share pre-analysis plans (see also Haven, Rosenblatt et al. Reference Haven, Rosenblatt, Pineiro and Kern2020).

3 Yet in recent years there have been several attempts to define what constitutes “good” qualitative inquiry in our field. Initiatives such as the Qualitative and Multi-Method Research section of the American Political Science Association (APSA), the Qualitative Transparency Deliberations (Jacobs et al. Reference Jacobs, Büthe, Arjona, Arriola, Bellin, Bennett and Björkman2021) and the Data Access and Research Transparency (Jacoby, Ishiyama, Gaines et al. Reference Jacoby, Ishiyama, Gaines, Golder, Ansell, Samuels and Hartzell2015) programs led by political science journal editors (Jacoby, Ishiyama, Golder et al. Reference Jacoby, Ishiyama, Golder, Ansell, Samuels, Hartzell and Björkdal2015), the APSA Council’s Principles and Guidance for Human Subjects Research (APSA Human Subjects Research Ad Hoc Committee 2020), Annotation for Transparent Inquiry (Kapiszewski and Karcher Reference Kapiszewski and Karcher2021a), and the active citation approach (Moravcsik Reference Moravcsik2010) are some of these important professional efforts. These initiatives promote greater transparency and accountability in qualitative research. Yet I must note that transparency does not necessarily translate into “best practices” or “good” qualitative research. This is just one part of the debate.

4 It is important to note that reproducibility does not equate to replicability.

5 Various types of interview research are used in political science, but my focus is specifically on cases where researchers intentionally and explicitly adopt “elite interviewing” as their approach. Additionally, I focus exclusively on qualitative interviewing methods, excluding studies where elites are involved in surveys or survey experiments. Including the latter would shift the methodological focus of this study.

6 I acknowledge that citation metrics have limitations and that ranking criteria vary across different schemes. Yet overlaps between different rankings reflect some consensus in the discipline. Online appendix A provides more details on journal selection and metrics, article selection, and data coding.

7 As in other meta-analyses, I refrained from including journals with a regional focus or a political methodology focus, in order to capture trends in general political science research. However, I acknowledge that this exclusion could skew my findings, as these journals might be more likely to feature elite interviewing as a tool. To assess the potential impact of this limitation, I conducted a series of supplementary analyses across a broader range of journals, including those with a CP focus. As detailed in online appendix B, the patterns observed in these additional sources closely mirror those in the original dataset, lending further support to the generalizability of the main findings.

8 For a broader discussion of this trend, see also Collier and Elman (Reference Collier, Elman, Box-Steffensmeier, Brady and Collier2008), Gerring (Reference Gerring2017), Mahoney and Goertz (Reference Mahoney and Goertz2006), and Yanow (Reference Yanow2003).

9 The heavy focus on elite interviewing in CP is further supported by Ellinas (Reference Ellinas2023), who shows that interviews in this subfield are often used as a primary source of evidence. However, despite recent efforts, interview research still holds a “secondary status” in CP and has not been fully revitalized (Ellinas Reference Ellinas2023).

10 Online appendix C provides further journal- and article-level information to account for the distribution of articles in detail.

11 Cammett and Kendall (Reference Cammett and Kendall2021) created this list of topics based on their analysis of Middle Eastern and North African countries in political science journals. I borrowed this list of categories and adapted it as necessary to cover research topics in my dataset.

12 For instance, certain journals received a substantial number of submissions during the pandemic, while female scholars published less than their male counterparts in the first year of the pandemic (Stockemer and Reidy Reference Stockemer and Reidy2024).

13 In one exception, Musil (Reference Musil2024) discussed the impact of COVID-19 on the research and explained why some interviews were conducted in person and others online.

14 This trend might be influenced by various factors, including technological advancements and evolving research methodologies. Additionally, lengthy review processes can create a publication backlog, preventing the trend from appearing in this sample. Researchers may also refrain from reporting specific influences on their projects, considering them irrelevant to the core findings or fearing they could detract from the academic rigor of their research.

15 Since these developments are quite recent, their impact is unlikely to be reflected in my dataset.

16 I also contacted editors from these journals, who emphasized that the goal of these policies is to enhance rigor and reproducibility in qualitative research. Some also noted that further discussions on strengthening ethical standards and transparency in qualitative research are ongoing.

17 This is not surprising, given that empirical political science has relied more on quantitative approaches since the early 2000s (Wilson Reference Wilson2017).

18 One potential limitation of ATI and active citation is the perception that these practices are time consuming. However, Elman et al. (Reference Elman, Kapiszewski, Moravcsik and Karcher2017) report that researchers generally find these practices rewarding rather than burdensome. It is important to note that repository fees (such as those for the Qualitative Data Repository) can pose a significant limitation for researchers who lack the necessary resources. Another related practice is citing the source and, where relevant, the precise page number or page ranges. This approach enhances transparency by allowing readers to directly locate the specific information or argument referenced in the source material. Incorporating these practices into the research process models a commitment to accessibility and rigor.

References

Aberbach, Joel D., and Rockman, Bert A. 2002. “Conducting and Coding Elite Interviews.” PS: Political Science & Politics 35 (4): 673–76. DOI: 10.1017/S1049096502001142.
APSA Human Subjects Research Ad Hoc Committee. 2020. “Principles and Guidance for Human Subjects Research.” Washington, DC: APSA. https://connect.apsanet.org/hsr/principles-and-guidance. Retrieved August 5, 2025.
APSR Editors. 2022. “Publishing Your Qualitative Manuscript in the APSR.” Cambridge Core blog, March 3. Cambridge, UK: Cambridge University Press. https://www.cambridge.org/core/blog/2022/03/03/publishing-your-qualitative-manuscript-in-the-apsr. Retrieved August 5, 2025.
APSR Editors. 2024. “Transition at the APSR.” APSR News, May 28. Cambridge, UK: Cambridge University Press. https://www.cambridge.org/core/journals/american-political-science-review/announcements/news/transition-at-the-apsr. Retrieved August 5, 2025.
Beckmann, Matthew N. 2010. Pushing the Agenda: Presidential Leadership in US Lawmaking, 1953–2004. Cambridge, UK: Cambridge University Press. DOI: 10.1017/CBO9780511845154.
Bennett, Andrew, and Elman, Colin. 2007. “Qualitative Methods: The View from the Subfields.” Comparative Political Studies 40 (2): 111–21. DOI: 10.1177/0010414006296344.
Bentancur, Verónica Pérez, Rodríguez, Rafael Piñeiro, and Rosenblatt, Fernando. 2021. “Using Pre-Analysis Plans in Qualitative Research.” Qualitative and Multi-Method Research 19 (1): 9–13. DOI: 10.5281/zenodo.5495552.
Berry, Jeffrey M. 2002. “Validity and Reliability Issues in Elite Interviewing.” PS: Political Science & Politics 35 (4): 679–82. DOI: 10.1017/S1049096502001166.
Bleich, Erik, and Pekkanen, Robert. 2013. “How to Report Interview Data.” In Interview Research in Political Science, ed. Mosley, Layna, 84–106. Ithaca, NY: Cornell University Press. DOI: 10.7591/9780801467974-007.
Brinkmann, Svend. 2014. “Unstructured and Semi-Structured Interviewing.” In The Oxford Handbook of Qualitative Research, ed. Leavy, Patricia, 277–99. Oxford, UK: Oxford University Press. DOI: 10.1093/oxfordhb/9780199811755.013.030.
Brooks, Sarah M. 2013. “The Ethical Treatment of Human Subjects and the Institutional Review Board Process.” In Interview Research in Political Science, ed. Mosley, Layna, 45–66. Ithaca, NY: Cornell University Press. DOI: 10.7591/9780801467974-005.
Buckley, David. 2016. “Demanding the Divine? Explaining Cross-National Support for Clerical Control of Politics.” Comparative Political Studies 49 (3): 357–90. DOI: 10.1177/0010414015617964.
Bush, Sarah Sunn, Donno, Daniela, and Zetterberg, Pär. 2024. “International Rewards for Gender Equality Reforms in Autocracies.” American Political Science Review 118 (3): 1189–1203. DOI: 10.1017/S0003055423001016.
Cammett, Melani, and Kendall, Isabel. 2021. “Political Science Scholarship on the Middle East: A View from the Journals.” PS: Political Science & Politics 54 (3): 448–55. DOI: 10.1017/S1049096520002061.
Carnegie, Allison, and Carson, Austin. 2020. Secrets in Global Governance: Disclosure Dilemmas and the Challenge of International Cooperation. Cambridge, UK: Cambridge University Press. DOI: 10.1017/9781108778114.
Clark, Tom S. 2010. The Limits of Judicial Independence. Cambridge, UK: Cambridge University Press. DOI: 10.1017/CBO9780511761867.
Closa, Carlos. 2021. “Planning, Implementing and Reporting: Increasing Transparency, Replicability and Credibility in Qualitative Political Science Research.” European Political Science 20 (2): 270–80. DOI: 10.1057/s41304-020-00299-2.
Collett, Clementine. 2024. “The Hustle: How Struggling to Access Elites for Qualitative Interviews Alters Research and the Researcher.” Qualitative Inquiry 30 (7): 555–67. DOI: 10.1177/10778004231188054.
Collier, David, and Elman, Colin. 2008. “Qualitative and Multimethod Research: Organizations, Publication, and Reflections on Integration.” In The Oxford Handbook of Political Methodology, eds. Box-Steffensmeier, Janet M., Brady, Henry E., and Collier, David, 779–95. Oxford, UK: Oxford University Press. DOI: 10.1093/oxfordhb/9780199286546.003.0034.
Comparative Political Studies. 2025. “Submission Guidelines.” Thousand Oaks, CA: Sage Journals. https://journals.sagepub.com/author-instructions/CPS. Retrieved August 5, 2025.
Cramer, Katherine J., and Toff, Benjamin. 2017. “The Fact of Experience: Rethinking Political Knowledge and Civic Competence.” Perspectives on Politics 15 (3): 754–70. DOI: 10.1017/S1537592717000949.
Ellinas, Antonis A. 2023. “The Interview Method in Comparative Politics: The Process of Interviewing Far-Right Actors.” Government and Opposition 58 (4): 661–81. DOI: 10.1017/gov.2021.58.
Elman, Colin, Kapiszewski, Diana, Moravcsik, Andrew, and Karcher, Sebastian. 2017. “A Guide to Annotation for Transparent Inquiry (ATI).” Version 1.0. Paper prepared for the ATI Pilot Working Group, Workshop No. 1, December 1–13.
Elman, Colin, Kapiszewski, Diana, and Lupia, Arthur. 2018. “Transparent Social Inquiry: Implications for Political Science.” Annual Review of Political Science 21: 29–47. DOI: 10.1146/annurev-polisci-091515-025429.
Elman, Colin, Kapiszewski, Diana, and Vinuela, Lorena. 2010. “Qualitative Data Archiving: Rewards and Challenges.” PS: Political Science & Politics 43 (1): 23–27. DOI: 10.1017/S104909651099077X.
Emmons, Cassandra V., and Moravcsik, Andrew M. 2020. “Graduate Qualitative Methods Training in Political Science: A Disciplinary Crisis.” PS: Political Science & Politics 53 (2): 258–64. DOI: 10.1017/S1049096519001719.
Fenno, Richard F. Jr. 1978. Home Style: House Members in Their Districts. Boston: Little, Brown.
Fujii, Lee Ann. 2012. “Research Ethics 101: Dilemmas and Responsibilities.” PS: Political Science & Politics 45 (4): 717–23. DOI: 10.1017/S1049096512000819.
Gerring, John. 2017. “Qualitative Methods.” Annual Review of Political Science 20 (1): 15–36. DOI: 10.1146/annurev-polisci-092415-024158.
Goldstein, Kenneth. 2002. “Getting In the Door: Sampling and Completing Elite Interviews.” PS: Political Science & Politics 35 (4): 669–72. DOI: 10.1017/S1049096502001130.
Gonzalez-Ocantos, Ezequiel, and Masullo, Juan. 2024. “Aligning Interviewing with Process Tracing.” Sociological Methods & Research (June). DOI: 10.1177/00491241241258229.
Haven, Tamarinde L., Rosenblatt, Fernando, Pineiro, Rafael, and Kern, Florian G. 2020. “Qualitative Preregistration.” Center for Open Science blog, December 10. Washington, DC: Center for Open Science. https://www.cos.io/blog/qualitative-preregistration. Retrieved August 5, 2025.
Haven, Tamarinde L., Errington, Timothy M., Gleditsch, Kristian Skrede, van Grootel, Leonie, Jacobs, Alan M., Kern, Florian G., Piñeiro, Rafael, Rosenblatt, Fernando, and Mokkink, Lidwine B. 2020. “Preregistering Qualitative Research: A Delphi Study.” International Journal of Qualitative Methods 19. DOI: 10.1177/1609406920976417.
Ioannidis, John P. A. 2005. “Why Most Published Research Findings Are False.” PLOS Medicine 2 (8): e124. DOI: 10.1371/journal.pmed.0020124.
Jacobs, Alan M., Kapiszewski, Diana, and Karcher, Sebastian. 2022. “Using Annotation for Transparent Inquiry (ATI) to Teach Qualitative Research Methods.” PS: Political Science & Politics 55 (1): 216–20. DOI: 10.1017/S1049096521001335.
Jacobs, Alan M., Büthe, Tim, Arjona, Ana, Arriola, Leonardo R., Bellin, Eva, Bennett, Andrew, Björkman, Lisa, et al. 2021. “The Qualitative Transparency Deliberations: Insights and Implications.” Perspectives on Politics 19 (1): 171–208. DOI: 10.1017/S1537592720001164.
Jacoby, William G., Ishiyama, John, Gaines, Brian J., Golder, Sona, Ansell, Ben, Samuels, David J., Hartzell, Caroline A., et al. 2015. “The Journal Editors’ Transparency Statement (JETS).” Commitment pledge and petition, updated November 24, 2015. Data Access and Research Transparency (DA-RT). https://www.dartstatement.org/2014-journal-editors-statement-jets. Retrieved August 5, 2025.
Jacoby, William G., Ishiyama, John, Golder, Sona, Ansell, Ben, Samuels, David J., Hartzell, Caroline A., Björkdal, Annika, et al. 2015. “Data Access and Research Transparency (DA-RT): A Joint Statement by Political Science Journal Editors.” Political Science Research and Methods 3 (3): 421. DOI: 10.1017/psrm.2015.44.
Jones, Calvert W. 2019. “Adviser to the King: Experts, Rationalization, and Legitimacy.” World Politics 71 (1): 1–43. DOI: 10.1017/S0043887118000217.
Journal of Peace Research. 2025. “Submissions and Enquiries.” Oslo: Peace Research Institute Oslo. https://www.prio.org/journals/jpr/submissions. Retrieved August 5, 2025.
Journal of Politics. 2025. “Instructions to Authors.” Chicago: University of Chicago Press Journals. https://www.journals.uchicago.edu/journals/jop/instruct. Retrieved August 5, 2025.
Kapiszewski, Diana. 2024. “Research Transparency in Qualitative Inquiry.” In Doing Good Qualitative Research, eds. Cyr, Jennifer and Goodman, Sara Wallace, 435–45. Oxford, UK: Oxford University Press. DOI: 10.1093/oso/9780197633137.003.0037.
Kapiszewski, Diana, and Wood, Elisabeth Jean. 2022. “Ethics, Epistemology, and Openness in Research with Human Participants.” Perspectives on Politics 20 (3): 948–64. DOI: 10.1017/S1537592720004703.
Kapiszewski, Diana, MacLean, Lauren, and Smith, Lahra. 2024. “Digital Fieldwork: Opportunities and Challenges.” In Doing Good Qualitative Research, eds. Cyr, Jennifer and Goodman, Sara Wallace, 323–36. Oxford, UK: Oxford University Press. DOI: 10.1093/oso/9780197633137.003.0028.
Kapiszewski, Diana, and Karcher, Sebastian. 2021a. “Empowering Transparency: Annotation for Transparent Inquiry (ATI).” PS: Political Science & Politics 54 (3): 473–78. DOI: 10.1017/S1049096521000287.
Kapiszewski, Diana, and Karcher, Sebastian. 2021b. “Transparency in Practice in Qualitative Research.” PS: Political Science & Politics 54 (2): 285–91. DOI: 10.1017/S1049096520000955.
Kertzer, Joshua D., and Renshon, Jonathan. 2022. “Experiments and Surveys on Political Elites.” Annual Review of Political Science 25 (1): 529–50. DOI: 10.1146/annurev-polisci-051120-013649.
King, Gary. 1995. “Replication, Replication.” PS: Political Science & Politics 28 (3): 444–52. DOI: 10.2307/420301.
King, Gary, Keohane, Robert O., and Verba, Sidney. 1994. Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton, NJ: Princeton University Press. DOI: 10.1515/9781400821211.
Knott, Eleanor. 2019. “Beyond the Field: Ethics after Fieldwork in Politically Dynamic Contexts.” Perspectives on Politics 17 (1): 140–53. DOI: 10.1017/S1537592718002116.
Leech, Beth L. 2002. “Interview Methods in Political Science.” PS: Political Science & Politics 35 (4): 663–64. DOI: 10.1017/S1049096502001117.
Mahoney, James, and Goertz, Gary. 2006. “A Tale of Two Cultures: Contrasting Quantitative and Qualitative Research.” Political Analysis 14 (3): 227–49. DOI: 10.1093/pan/mpj017.
Mangonnet, Jorge, and Murillo, María Victoria. 2020. “Protests of Abundance: Distributive Conflict over Agricultural Rents during the Commodities Boom in Argentina, 2003–2013.” Comparative Political Studies 53 (8): 1223–58. DOI: 10.1177/0010414019897417.
Markiewicz, Tadek. 2024. “Talking to the State: Interviewing the Elites about What’s Not to Be Said.” International Studies Perspectives 25 (2): 265–84. DOI: 10.1093/isp/ekad013.
Mayka, Lindsay. 2021. “The Power of Human Rights Frames in Urban Security: Lessons from Bogotá.” Comparative Politics 54 (1): 1–25. DOI: 10.5129/001041521X16115808641104.
Mir, Asfandyar. 2018. “What Explains Counterterrorism Effectiveness? Evidence from the US Drone War in Pakistan.” International Security 43 (2): 45–83. DOI: 10.1162/isec_a_00331.
Moravcsik, Andrew. 2010. “Active Citation: A Precondition for Replicable Qualitative Research.” PS: Political Science & Politics 43 (1): 29–35. DOI: 10.1017/S1049096510990781.
Mosley, Layna. 2013. “‘Just Talk to People’? Interviews in Contemporary Political Science.” In Interview Research in Political Science, ed. Mosley, Layna, 1–28. Ithaca, NY: Cornell University Press. DOI: 10.7591/9780801467974-003.
Musil, Pelin Ayan. 2024. “How Opposition Parties Unite in Competitive Authoritarian Regimes: The Role of an Intermediary Party.” Democratization 31 (1): 210–32. DOI: 10.1080/13510347.2023.2260762.
Nownes, Anthony J. 2006. Total Lobbying: What Lobbyists Want (and How They Try to Get It). Cambridge, UK: Cambridge University Press. DOI: 10.1017/CBO9780511840395.
Ntienjom Mbohou, Léger Félix, and Tomkinson, Sule. 2022. “Rethinking Elite Interviews through Moments of Discomfort: The Role of Information and Power.” International Journal of Qualitative Methods 21. DOI: 10.1177/16094069221095312.
Richards, David. 1996. “Elite Interviewing: Approaches and Pitfalls.” Politics 16 (3): 199–204. DOI: 10.1111/j.1467-9256.1996.tb00039.x.
Saunders, Benjamin, Kitzinger, Jenny, and Kitzinger, Celia. 2015. “Anonymising Interview Data: Challenges and Compromise in Practice.” Qualitative Research 15 (5): 616–32. DOI: 10.1177/1468794114550439.
Shesterinina, Anastasia. 2016. “Collective Threat Framing and Mobilization in Civil War.” American Political Science Review 110 (3): 411–27. DOI: 10.1017/S0003055416000277.
Small, Mario Luis, and Calarco, Jessica McCrory. 2022. Qualitative Literacy: A Guide to Evaluating Ethnographic and Interview Research. Oakland, CA: University of California Press. DOI: 10.1525/9780520390676.
Stockemer, Daniel, and Reidy, Theresa. 2024. “Introduction: Pandemic and Post-Pandemic Publication Patterns in Political Science.” PS: Political Science & Politics 57 (3): 403–7. DOI: 10.1017/S1049096523001051.
Tuncel, Ozlem. 2024. “Talking to Elites: A Guide for Novice Interviewers.” Qualitative and Multi-Method Research 22 (1): 53–60. DOI: 10.5281/zenodo.11097657.
Tuncel, Ozlem. 2025. “Replication Data for: Elite Interviewing in Political Science: A Meta-Analysis of Reporting Practices.” Harvard Dataverse. DOI: 10.7910/DVN/CDXO5O.
Weeks, Ana Catalano. 2018. “Why Are Gender Quota Laws Adopted by Men? The Role of Inter- and Intraparty Competition.” Comparative Political Studies 51 (14): 1935–73. DOI: 10.1177/0010414018758762.
Wilson, Matthew Charles. 2017. “Trends in Political Science Research and the Progress of Comparative Politics.” PS: Political Science & Politics 50 (4): 979–84. DOI: 10.1017/S104909651700110X.
Yanow, Dvora. 2003. “Interpretive Empirical Political Science: What Makes This Not a Subfield of Qualitative Methods.” Qualitative Methods 1 (2): 9–13. DOI: 10.5281/zenodo.998761.
Figures and Tables

Table 1 Number of Interviews in Journals (2000–23)
Figure 1 Distribution of Articles Using Elite Interviews over the Years
Table 2 Topics Covered in Elite Interviews
Figure 2 Regional Distribution of Elite Interviewing. Notes: “Global” refers to cases where the research topic involved several countries in different continents.
Table 3 Number of Interviews Conducted
Figure 3 Appendix Use, Anonymity Decisions, and Reporting IRB Information over the Years
