Submitted manuscripts often face an arduous journey, yet they have the potential to make significant contributions that reach wide and relevant audiences. In this chapter, I offer a path and guidelines for journal submissions, covering both the editor’s perspective on handling submitted manuscripts and the implications for authors. Although journals vary in how manuscripts are handled, the following three main phases typically occur in some form: (1) submissions are screened to determine their appropriateness for the journal; (2) manuscripts that pass screening are assigned to reviewers by the editor or an associate editor; and (3) manuscripts that survive the review process are accepted and published. I hope this information helps editors and authors by elucidating the process of handling submitted manuscripts and by improving the chances of successful and productive contributions.
The social world is fascinating – full of complexities, tensions, and contradictions. Social scientists have long been interested in better understanding the social world around us. Unlike quantitative research, which focuses on collecting and analyzing numerical data to make statistical inferences about the social world, qualitative research contributes to empirical and theoretical understandings of society by examining and explaining how and why people think and act as they do through the use of non-numerical data. In other words, qualitative research uncovers the social processes and mechanisms undergirding human behavior. In this chapter, we will discuss how to design a qualitative research project using two of the most common qualitative research methods: in-depth interviewing and ethnographic observation (also known as ethnography or participant observation). We will begin the chapter by discussing the what, how, and why of interviewing and ethnography. We will then discuss the importance of interrogating one’s underlying ontological and epistemological assumptions about research (and the research process) and the steps to follow in designing a qualitative study. We conclude the chapter by reviewing the different elements to consider when developing a qualitative research project.
This chapter provides an overview of exploratory factor analysis (EFA) from an applied perspective. We start with a discussion of general issues and applications, including definitions of EFA and the underlying common factors model. We briefly cover history and general applications. The most substantive part of the chapter focuses on six steps of EFA. More specifically, we consider variable (or indicator) selection (Step 1), computing the variance–covariance matrix (Step 2), factor-extraction methods (Step 3), factor-retention procedures (Step 4), factor-rotation methods (Step 5), and interpretation (Step 6). We include a data analysis example throughout (with example code for R), with full details in an online supplement. We hope the chapter will provide helpful guidance to applied researchers in the social and behavioral sciences.
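As a brief, hypothetical illustration of Steps 2 through 6 (not the chapter’s own example), the R sketch below computes a correlation matrix from a data frame of indicators, inspects eigenvalues to inform factor retention, and fits a two-factor maximum-likelihood solution with an oblique rotation; the data frame df and the choice of two factors are assumptions made only for illustration:

    # Assumed: df is a data frame of numeric indicator variables (Step 1).
    R <- cor(df, use = "pairwise.complete.obs")      # Step 2: correlation matrix
    eigen(R)$values                                  # Step 4: eigenvalues to guide factor retention
    fit <- factanal(covmat = R, factors = 2,         # Step 3: maximum-likelihood extraction
                    n.obs = nrow(df),
                    rotation = "promax")             # Step 5: oblique (promax) rotation
    print(fit$loadings, cutoff = 0.30)               # Step 6: interpret salient loadings

The online supplement referenced in the chapter provides the full worked example.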
Peer review supports decisions related to publications, grant proposals, awards, or personnel selection. Independent of the specific occasion, we propose validity as a chief evaluation criterion for reviews. While applicable to all occasions, the principles of validity-oriented quality control are particularly suited to journal reviews. Beyond evaluating the validity and scientific potential of a given piece of research, we address how peer reviewing serves important functions for, and shares responsibility for, the growth of science at a superordinate level. We also provide guidelines and concrete recommendations for how a good peer review may serve these functions. Good peer review thereby fosters both the advancement of scientific research and the quality, precision, and sincerity of the scientific literature. The end of the chapter is devoted to a core set of good reviewer practices, conceived as an essential feature of academic culture.
Bayes factors – evidence for one model versus another – are a useful tool in the social and behavioral sciences, partly because they can provide evidence for no effect relative to the sort of effect expected. By contrast, a non-significant result does not provide evidence for the null hypothesis tested. If non-significance does not in itself count against a theory predicting an effect, how could a theory fail a test? Bayes factors provide a measure of evidence from first principles. A severe test is one that would be likely to obtain evidence against a theory if that theory were false – that is, to obtain an extreme Bayes factor against it. Bayes factors show why cherry picking degrades evidence, how to deal with multiple testing, and how optional stopping is consistent with severe testing. Further, informed Bayes factors can be used to link a theory tightly to how that theory is tested, so that the measured evidence genuinely relates to the theory.
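For reference, the Bayes factor comparing an alternative hypothesis H1 to a null hypothesis H0 on data D is the ratio of the probability of the data under each hypothesis, and it converts prior odds into posterior odds (a standard definition, not specific to this chapter’s notation):

    BF_{10} = \frac{P(D \mid H_1)}{P(D \mid H_0)}, \qquad \frac{P(H_1 \mid D)}{P(H_0 \mid D)} = BF_{10} \times \frac{P(H_1)}{P(H_0)}

A BF10 well below 1 therefore quantifies evidence for the null relative to the specified alternative, which a non-significant p-value alone cannot provide.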
Experimental research designs feature two essential ingredients: manipulation of an independent variable and random assignment of subjects. In a quasi-experimental design, by contrast, subjects are assigned to groups based on non-random criteria. This design still allows manipulation of the independent variable with the aim of examining causality between an intervention and an outcome. In social and behavioral research, it is useful when it is not logistically or ethically feasible to use a randomized controlled design – the “gold standard.” Although not as strong as a true experiment, non-equivalent control group pretest–posttest designs are usually higher in internal validity than correlational designs. Overcoming possible threats to internal and external validity in a non-equivalent control group pretest–posttest design, such as confounding variables, is discussed in relation to sample selection, power, effect size, and specific methods of data analysis.
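As one possible illustration of the analysis stage, a common approach to a non-equivalent control group pretest–posttest design is an ANCOVA-style regression that adjusts posttest scores for pretest scores while estimating the group effect. The sketch below is in R; the data frame df and the variables post, pre, and group are hypothetical names, not the chapter’s own example:

    # Assumed: df has columns post (posttest score), pre (pretest score),
    # and group (factor: intervention vs. control, non-randomly assigned).
    fit <- lm(post ~ pre + group, data = df)  # adjust for pretest differences
    summary(fit)                              # the group coefficient estimates the intervention effect

Because assignment is non-random, adjusting for the pretest reduces, but does not eliminate, bias from confounding variables.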
Social and behavioral researchers often draw on archival data – data collected by an entity other than the research team – to conduct scientific inquiry. Researchers typically seek to make claims about measured variables that extend beyond the measures themselves, such as interpreting a measure as representing an unobservable theoretical construct. Though researchers using archival data encounter many issues, this chapter focuses on two that have received less attention. The first concerns how researchers should justify the interpretations and uses they attach to archival measures. The second concerns how to justify generalizing findings. This chapter provides a framework to help researchers address these issues by drawing on contemporary validity theory in education and psychology as well as theory regarding causal mechanisms from philosophy and sociology. These concepts are illustrated using multiple examples from published studies.
The steps social and behavioral scientists take after the end of a study are just as important as the steps taken before and during it. The goal of this chapter is to discuss the practical and ethical considerations that should be addressed before participants leave the physical or virtual study space. We review several post-experimental techniques, including debriefing, manipulation checks, attention checks, mitigating participant crosstalk, and probing for suspicion regarding the purpose of the study. Within this review, we address issues with the implementation of each post-experimental technique as well as best practices for their use, with an emphasis placed on prevention of validity threats and the importance of accurate reporting of the steps taken after the experiment ends. Finally, we emphasize the importance of continuing to develop and empirically test post-experimental practices, with suggestions for future research.
Cross-sectional studies are a type of observational study in which the researcher commonly assesses the exposure, the outcome, and other variables (such as confounding variables) at the same time. They are also referred to as “prevalence studies.” These studies are useful in a range of disciplines across the social and behavioral sciences. The common statistical estimates from these studies are correlation values, prevalence estimates, prevalence odds ratios, and prevalence ratios. These studies can be completed relatively quickly, are relatively inexpensive to conduct, and may be used to generate new hypotheses. However, their major limitations are sampling bias, length-time bias, same-source bias, and, in many scenarios, the inability to establish a clear temporal association between exposure and outcome. The researcher should therefore be careful when interpreting measures of association from these studies, as it may not be appropriate to draw causal inferences from them.
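To make these estimates concrete, a prevalence odds ratio and a prevalence ratio can be computed from a 2×2 table of exposure by outcome. The counts in the R sketch below are purely hypothetical:

    # Hypothetical 2x2 table: rows = exposed / unexposed, columns = outcome present / absent.
    tab <- matrix(c(30, 70,
                    15, 85), nrow = 2, byrow = TRUE)
    por <- (tab[1, 1] * tab[2, 2]) / (tab[1, 2] * tab[2, 1])          # prevalence odds ratio
    pr  <- (tab[1, 1] / sum(tab[1, ])) / (tab[2, 1] / sum(tab[2, ]))  # prevalence ratio
    c(prevalence_odds_ratio = por, prevalence_ratio = pr)

As the chapter cautions, such associations describe prevalence at a single point in time and should not, by themselves, be read as causal effects.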
Meta-analysis is a form of data synthesis that statistically combines the results of primary research studies responding to a given question. It has become an indispensable tool for decision making and advancement of knowledge in a variety of disciplines. This chapter provides an overview of this method, beginning with a brief discussion of systematic reviews – the research methodology that undergirds meta-analysis. The chapter then explores specific components of this approach as it is most widely applied in the literature, including issues related to effect sizes, heterogeneity of study outcomes, scope of the analysis, and quality-control issues to consider when conducting a meta-analysis. A brief overview of new and emerging methods for the synthesis of primary research data is also provided, highlighting different forms of meta-analysis and different approaches for the synthesis of research data. Practical examples are provided as illustrations to clarify and reinforce the concepts presented in this chapter.
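As a minimal sketch of how effect sizes might be pooled and heterogeneity summarized, the example below uses the metafor package in R, one widely used option rather than the chapter’s prescribed tool; dat, yi, and vi are assumed names for a study-level data set, each study’s effect size, and its sampling variance:

    library(metafor)                        # general-purpose meta-analysis package
    # Assumed: dat has one row per study, with columns yi (effect size) and vi (sampling variance).
    res <- rma(yi = yi, vi = vi, data = dat, method = "REML")  # random-effects model
    summary(res)                            # pooled estimate, Q test, and I^2 for heterogeneity
    forest(res)                             # forest plot of study-level and pooled effects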
A literature review is a survey of scholarly sources that establishes familiarity with and an understanding of current research in a particular field. It includes a critical analysis of the relationships among different works, seeking a synthesis and an explanation of gaps, while relating findings to the project at hand. It also serves as a foundational element of a well-grounded thesis or dissertation, reveals gaps in a specific field, and establishes credibility and need for those applying for a grant. The enormous volume of textual information now available necessitates tools that help researchers process large amounts of data effectively and efficiently and quickly search, classify, and assess their relevance. This chapter presents an accessible guide to writing a comprehensive review of the literature. It begins with a discussion of the purpose of the literature review and then presents steps for conducting an organized, relevant review.
Finding one’s niche in any scientific domain is often challenging, but there are certain tips and steps that can foster a productive research program. In this chapter, we use terror management theory (TMT) as an exemplar of what designing a successful line of research entails. To this end, we present an overview of the development and execution of our research program, including testing of original hypotheses, direct and conceptual replications, identification of moderating and mediating variables, and how efforts to understand failures to replicate mortality salience effects led to important conceptual refinements of the theory. Our hope is that recounting the history of terror management theory and research will be useful for younger scholars in their own research pursuits in the social and behavioral sciences.
This chapter is for all academics, from students and faculty to professional staff at research centers and institutions. The content draws upon our experiences as budding scholars, then as experienced scientists, and now as administrators, including time spent at federal funding agencies. Our aim is to provide information so that you can write more competitive grant proposals and secure greater resources for your research and scholarship. First, we provide a broad overview of what to consider before you embark upon writing a proposal. Then, we discuss areas for consideration in writing the proposal itself. Finally, we share steps to consider after you have received feedback on your proposal. We also provide some detail about particular funders, including support for international scholarship. As with all scholarship, persistence, collaboration, and support from colleagues are helpful for successfully securing external funding.