
Moving from opposition to taking ownership of open science to make discoveries that matter

Published online by Cambridge University Press: 27 January 2023

Oliver Weigelt*
Affiliation:
Wilhelm Wundt Institute of Psychology, Leipzig University, Leipzig, Germany
Kimberly A. French
Affiliation:
School of Psychology, Georgia Institute of Technology, Atlanta, USA
Jessica de Bloom
Affiliation:
Faculty of Economics and Business, University of Groningen, Groningen, The Netherlands
Carolin Dietz
Affiliation:
Wilhelm Wundt Institute of Psychology, Leipzig University, Leipzig, Germany
Michael Knoll
Affiliation:
Wilhelm Wundt Institute of Psychology, Leipzig University, Leipzig, Germany
Jana Kühnel
Affiliation:
Department of Occupational, Economic and Social Psychology, University of Vienna, Vienna, Austria
Laurenz L. Meier
Affiliation:
Institute of Work and Organizational Psychology, University of Neuchâtel, Neuchâtel, Switzerland
Roman Prem
Affiliation:
Institute of Psychology, University of Graz, Graz, Austria
Shani Pindek
Affiliation:
Department of Human Services, University of Haifa, Haifa, Israel
Antje Schmitt
Affiliation:
Faculty of Economics and Business, University of Groningen, Groningen, The Netherlands
Christine J. Syrek
Affiliation:
Faculty of Business Psychology, University of Applied Sciences Bonn-Rhein-Sieg, Rheinbach, Germany
Floor Rink
Affiliation:
Faculty of Economics and Business, University of Groningen, Groningen, The Netherlands
*Corresponding author. Email: oliver.weigelt@uni-leipzig.de

Type: Commentaries
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of the Society for Industrial and Organizational Psychology

Guzzo et al. (2022) argue that open science practices may marginalize inductive and abductive research and preclude leveraging big data for scientific research. We share their assessment that the hypothetico-deductive paradigm has limitations (see also Staw, 2016) and that big data provide grand opportunities (see also Oswald et al., 2020). However, we arrive at very different conclusions. Rather than opposing open science practices that build on a hypothetico-deductive paradigm, we should take the initiative to do open science in a way that is compatible with the very nature of our discipline, namely by incorporating ambiguity and inductive decision-making. In this commentary, we (a) argue that inductive elements are necessary for research in naturalistic field settings across different stages of the research process, (b) discuss some misconceptions about open science practices that hide or discourage inductive elements, and (c) propose that field researchers can take ownership of open science in a way that embraces ambiguity and induction. We use an example research study to illustrate our points.

Giving vague answers to precise questions when preregistering a typical study

Imagine we were interested in whether workload during the workday affects exhaustion the next morning and planned to study this question using experience sampling methodology (ESM). Upon filling in the blanks of the preregistration template, it is straightforward to state that workload predicts higher levels of exhaustion the next day. However, the going gets tough when it comes to calculating test power and optimal sample sizes. The first task is determining our anticipated effect size. Specific ESM studies (Prem et al., 2018) or meta-analytic evidence (Crawford et al., 2010; Alarcon, 2011) may provide average effect sizes for the bivariate links between workload and exhaustion. However, with few exceptions (McCormick et al., 2018), the meta-analyses on this topic speak to the between-person level of analysis, most studies refer to proxies of our focal measures, and random-effects estimates typically suggest that a wide range of effect sizes is plausible (Brannick et al., 2021). Hence, basing our power calculation on meta-analytic effect size estimates is imprecise at best. Design features might also change our effect size. For example, the effect size might vary when studying workload and exhaustion with a time lag of a few hours compared to measuring the constructs concurrently (Pindek et al., 2019). Without knowing the unique temporal effects, we can only assume that the effect sizes are similar across these two design choices. Because we plan to study changes in exhaustion from one day to the next by controlling for prior exhaustion, we would need coefficients from a study that has examined changes in exhaustion across a similar period—another feature that may change the anticipated effect. Given that such studies are rare or may not exist at all, we end up giving vague answers to specific questions about power analyses when trying to preregister a typical ESM study.
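Because no single published estimate maps cleanly onto this design, one pragmatic option is to simulate power across a range of plausible within-person effect sizes rather than committing to a single number. The following is a minimal sketch of such a Monte Carlo power analysis; the effect sizes, intraclass correlation, numbers of persons and days, and the person-mean-centered ("within") estimator used in place of a full multilevel model are illustrative assumptions, not recommendations.

```python
import numpy as np
from scipy import stats

def simulate_power(n_persons=100, n_days=10, beta_within=0.15,
                   icc=0.40, n_reps=1000, alpha=0.05, seed=1):
    """Monte Carlo power for the within-person workload -> next-morning
    exhaustion slope, using person-mean centering as a simple stand-in
    for a full multilevel model (all parameter values are assumptions)."""
    rng = np.random.default_rng(seed)
    n_obs = n_persons * n_days
    person = np.repeat(np.arange(n_persons), n_days)
    hits = 0
    for _ in range(n_reps):
        workload = rng.normal(size=n_obs)
        intercept = rng.normal(scale=np.sqrt(icc), size=n_persons)[person]
        exhaustion = (intercept + beta_within * workload
                      + rng.normal(scale=np.sqrt(1.0 - icc), size=n_obs))
        # remove stable between-person differences from predictor and outcome
        wl_c = workload - (np.bincount(person, workload) / n_days)[person]
        ex_c = exhaustion - (np.bincount(person, exhaustion) / n_days)[person]
        slope = (wl_c @ ex_c) / (wl_c @ wl_c)
        resid = ex_c - slope * wl_c
        se = np.sqrt((resid @ resid) / (n_obs - n_persons - 1) / (wl_c @ wl_c))
        p = 2 * stats.t.sf(abs(slope / se), df=n_obs - n_persons - 1)
        hits += p < alpha
    return hits / n_reps

# Report power across the plausible range of within-person effect sizes
for beta in (0.10, 0.15, 0.20):
    print(f"beta = {beta:.2f}: power = {simulate_power(beta_within=beta):.2f}")
```

Reporting such a power curve over a range of assumed effect sizes, rather than a single point estimate, is one way to preregister honestly when the literature only bounds the plausible effect.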

Ambiguity in preregistration might also arise in trying to detail the type of analyses we will be conducting. For example, we might document planned analyses, such as (serial) mediation, a bi-factor structure, accumulation effects, parallel change, a three-level data structure, (multilevel) group comparisons, or cross-level interactions. However, if the factorial structure is not supported in our measurement model, we may need to revise the measures used for hypothesis testing. Perhaps there is too little within-person variation in our exhaustion measure, or perhaps we were unable to recruit enough participants to conduct our hypothesized multilevel group comparisons. The decision tree for the analyses, even for testing rather simple models, might soon appear infinite. Moreover, our decisions remain full of uncertainty, depending on the amount and quality of the data we will ultimately obtain after study completion. Even with the clearest of intentions, a final analysis plan cannot be established without knowing the true properties of our data once they are collected. Again, we are therefore likely to give vague answers to specific preregistration questions beforehand.
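One way to make such contingencies explicit rather than leaving them implicit is to preregister the decision rule itself. The sketch below illustrates the idea; the thresholds, column names, and fallback analyses are hypothetical placeholders chosen for illustration, not field standards.

```python
import pandas as pd

def choose_analysis(df, min_persons=80, min_within_share=0.10):
    """Illustrative preregisterable contingency: pick the analysis based on
    properties of the data we cannot know before collection. `df` is assumed
    to hold one row per person-day with 'person' and 'exhaustion' columns."""
    n_persons = df["person"].nunique()
    # share of exhaustion variance that lies within persons (day-to-day variation)
    person_means = df.groupby("person")["exhaustion"].transform("mean")
    within_share = (df["exhaustion"] - person_means).var() / df["exhaustion"].var()
    if n_persons < min_persons:
        return "fallback: single-level analysis of person-level aggregates"
    if within_share < min_within_share:
        return "fallback: between-person model only; day-level hypothesis not testable"
    return "planned multilevel model (days nested in persons)"

# Usage with a toy data frame
toy = pd.DataFrame({"person": [1, 1, 2, 2], "exhaustion": [3.0, 3.4, 2.0, 2.1]})
print(choose_analysis(toy))
```

Writing the branching logic down in advance, even with admittedly provisional thresholds, documents that the choice among analyses was planned rather than made after peeking at the results.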

Being honest about what we know in advance

For researchers in I-O psychology, preregistration tends to create a catch-22: We are asked to ground our study in specific information about the sample, design, and data handling. Typically, however, neither existing theory nor the available empirical evidence is precise enough to justify such specific design decisions, and if precise information is already well documented, we run the risk that our planned research is less novel. We therefore propose being more honest about what we may know in advance and what we may not, and presenting the study as such in the preregistration and the paper. We can simply admit that our design and analysis plans are guided by our best educated guesses about the nature of the eventual data, collected within our anticipated context. Moreover, we can clearly communicate the developmental stage of our research programs in an effort to clarify the rationale for inductive and deductive approaches and to ensure that early-stage research programs are not held to the same standards of precision as more developed ones.

We argue that explicitly opening doors for inductive elements in quantitative empirical research would improve transparency and help to refine theories—after all, inductive testing is just as much part of the research cycle as deductive testing (Hollenbeck & Wright, 2017). Once we explicitly acknowledge how our theories are deficient, permitting ambiguity and induction can help to fill these gaps in a programmatic way (Bamberger, 2019). Relevant to ESM preregistration, for example, inductively testing and reporting effects across different lags might advance theories that are silent regarding temporal issues (George & Jones, 2000; Shipp & Cole, 2015). Moreover, while deductively obtained nonsignificant findings may illustrate that our prior scholarly thinking needs revisiting, inductive exploration of those findings allows us to examine how these views need to be refined (Spector, 2017; Spector et al., 2014). We contend that we will not make progress as a discipline unless we leverage the richness of information inherent in the data we analyze. Being honest about the specific gaps in our research, and about the type of goals we intended to reach with it (inductive versus deductive testing), will help create the conditions that open science depends on.
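To make the lag exploration concrete, a transparent approach is to estimate and report the association at every candidate lag instead of quietly selecting the one that "works." The sketch below assumes long-format ESM data with hypothetical 'person', 'occasion', 'workload', and 'exhaustion' columns; it is meant only to illustrate reporting a full lag profile.

```python
import pandas as pd

def within_person_lag_profile(df, max_lag=3):
    """Within-person correlation between workload and exhaustion measured
    0 to max_lag occasions later; all estimates are reported, not only
    the significant ones."""
    df = df.sort_values(["person", "occasion"])
    profile = {}
    for lag in range(max_lag + 1):
        lead = df.groupby("person")["exhaustion"].shift(-lag)
        d = pd.DataFrame({"person": df["person"],
                          "workload": df["workload"],
                          "exhaustion_lead": lead}).dropna()
        # person-mean-center both variables to isolate within-person covariation
        centered = d.groupby("person")[["workload", "exhaustion_lead"]].transform(
            lambda x: x - x.mean()
        )
        profile[lag] = centered["workload"].corr(centered["exhaustion_lead"])
    return profile
```

Declaring such an exploration as inductive in the preregistration, and reporting the entire profile, turns a potential garden of forking paths into documented discovery.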

Overcoming misconceptions and taking ownership of open science practices

Thinking through a study to plan the approach and contingencies likely facilitates doing better research, even when some answers remain vague. However, many of the existing templates and materials are rooted in basic research and pressure researchers to fill each and every blank and to provide precise answers when only vague ones are possible. They may also signal that research should be exclusively deductive in nature. We would like to set these misunderstandings straight. Rather than forcing the deductive paradigm on open research in I-O psychology, we should identify and refer to templates that fit our way of doing research (for a suitable example, see the PRP-QUANT template by Bosnjak et al., 2022). Even with a well-fitting preregistration template, we may only be able to fill in a few blanks, as illustrated above. However, providing some detail is always better than providing no information at all, and being honest about what we know and what we cannot know in advance aligns well with open science. Filling out a preregistration form should not become such a constraint that researchers try to avoid it. Allowing for blanks, ranges, or educated guesses that are declared as such can help transform preregistration from a time-consuming and frustrating experience into a developmental exercise (Daft, 1983) that helps researchers anticipate problems and create contingencies.

Of note, open science practices that reveal some degree of ambiguity in decision-making would need to be understood by editorial teams. In our view, allowing for induction in preregistration during the early stages of a research program may open the door to more novel and relevant research that can lead to a robust and replicable body of evidence when refined and improved in follow-up research. Preregistration can help make this happen by providing a platform for open communication among researchers about what they aimed to find in their studies and what they ultimately observed.

Reconciling confirmation and discovery

Being aware that deductive, abductive, and inductive research complement one another is not sufficient. We need to get involved in designing, adopting, and adapting open science practices so that they fit the unique nature of our discipline. As I-O psychologists, we need to capitalize on the unique opportunities we have: While embracing more flexibility in open science, we can still provide evidence that holds up in naturalistic field settings, where not all variables are held constant. Moreover, we can quantify, rather than cancel out, the role of context. We can study phenomena that emerge in the world of work and that may not (yet) exist according to theory, but that are familiar to millions. In this way, we can move open science from confirming what we believe we already know to making discoveries that matter.

References

Alarcon, G. (2011). A meta-analysis of burnout with job demands, resources, and attitudes. Journal of Vocational Behavior, 79, 549–562. https://doi.org/10.1016/j.jvb.2011.03.007
Bamberger, P. A. (2019). On the replicability of abductive research in management and organizations: Internal replication and its alternatives. Academy of Management Discoveries, 5(2), 103–108. https://doi.org/10.5465/amd.2019.0121
Bosnjak, M., Fiebach, C. J., Mellor, D., Mueller, S., O’Connor, D. B., Oswald, F. L., & Sokol, R. I. (2022). A template for preregistration of quantitative research in psychology: Report of the joint psychological societies preregistration task force. American Psychologist, 77(4), 602. https://doi.org/10.1037/amp0000879
Brannick, M. T., French, K. A., Rothstein, H. R., Kiselica, A. M., & Apostoloski, N. (2021). Capturing the underlying distribution in meta-analysis: Credibility and tolerance intervals. Research Synthesis Methods, 12(3), 264–290. https://doi.org/10.1002/jrsm.1479
Crawford, E. R., LePine, J. A., & Rich, B. L. (2010). Linking job demands and resources to employee engagement and burnout: A theoretical extension and meta-analytic test. Journal of Applied Psychology, 95(5), 834–848. https://doi.org/10.1037/a0019364
Daft, R. L. (1983). Learning the craft of organizational research. Academy of Management Review, 8(4), 539–546. https://doi.org/10.5465/amr.1983.4284649
George, J. M., & Jones, G. R. (2000). The role of time in theory and theory building. Journal of Management, 26(4), 657–684. https://doi.org/10.1177/014920630002600404
Guzzo, R. A., Schneider, B., & Nalbantian, H. R. (2022). Open science, closed doors: The perils and potential of open science for research in practice. Industrial and Organizational Psychology: Perspectives on Science and Practice, 15, 495–515.
Hollenbeck, J. R., & Wright, P. M. (2017). Harking, sharking, and tharking: Making the case for post hoc analysis of scientific data. Journal of Management, 43(1), 5–18. https://doi.org/10.1177/0149206316679487
McCormick, B. W., Reeves, C. J., Downes, P. E., Li, N., & Ilies, R. (2018). Scientific contributions of within-person research in management: Making the juice worth the squeeze. Journal of Management, 46(2), 321–350. https://doi.org/10.1177/0149206318788435
Oswald, F. L., Behrend, T. S., Putka, D. J., & Sinar, E. (2020). Big data in industrial-organizational psychology and human resource management: Forward progress for organizational research and practice. Annual Review of Organizational Psychology and Organizational Behavior, 7(1), 505–533. https://doi.org/10.1146/annurev-orgpsych-032117-104553
Pindek, S., Arvan, M. L., & Spector, P. E. (2019). The stressor-strain relationship in diary studies: A meta-analysis of the within and between levels. Work and Stress, 33(1), 1–21. https://doi.org/10.1080/02678373.2018.1445672
Prem, R., Paškvan, M., Kubicek, B., & Korunka, C. (2018). Exploring the ambivalence of time pressure in daily working life. International Journal of Stress Management, 25(1), 35–43. https://doi.org/10.1037/str0000044
Shipp, A. J., & Cole, M. S. (2015). Time in individual-level organizational studies: What is it, how is it used, and why isn’t it exploited more often? Annual Review of Organizational Psychology and Organizational Behavior, 2, 237–260. https://doi.org/10.1146/annurev-orgpsych-032414-111245
Spector, P. E. (2017). The lost art of discovery: The case for inductive methods in occupational health science and the broader organizational sciences. Occupational Health Science, 1(1), 11–27. https://doi.org/10.1007/s41542-017-0001-5
Spector, P. E., Rogelberg, S. G., Ryan, A. M., Schmitt, N., & Zedeck, S. (2014). Moving the pendulum back to the middle: Reflections on and introduction to the inductive research special issue of Journal of Business and Psychology. Journal of Business and Psychology, 29(4), 499–502. https://doi.org/10.1007/s10869-014-9372-7
Staw, B. M. (2016). Stumbling toward a social psychology of organizations: An autobiographical look at the direction of organizational research. Annual Review of Organizational Psychology and Organizational Behavior, 3(1), 1–19. https://doi.org/10.1146/annurev-orgpsych-041015-062524