
Where is my theoretical framework? When developmental reviewing turns into theorizing after results are known (TARKing)

Published online by Cambridge University Press:  31 March 2026

Mona Weiss*
Affiliation:
Friedrich-Schiller-University Jena, Jena, Germany
Wiebke Doden
Affiliation:
King’s Business School, King’s College London, London, UK
Mirko Antino
Affiliation:
Departamento de Psicobiología y Metodología en Ciencias del Comportamiento, Universidad Complutense de Madrid, Madrid, Spain
Jan B. Schmutz
Affiliation:
Department of Psychology, University of Zurich, Zurich, Switzerland
Dana Unger
Affiliation:
Department of Psychology, UiT The Arctic University of Norway, Tromsø, Norway
Corresponding author: Mona Weiss; Email: mona.weiss@uni-jena.de

Type: Commentaries
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press on behalf of Society for Industrial and Organizational Psychology

Introduction

In the spirit of further elaborating on the problematic side of developmental reviewing as defined by Allen et al. (Reference Allen, French, Avery, King and Wiernik2026), we introduce TARKing: a practice whereby researchers alter, exchange, add, or remove theoretical frameworks post hoc, not only to achieve greater consistency with their findings but, more importantly, to conform to reviewer suggestions. The pressure of developmental reviewing may encourage TARKing: authors are pushed to reshape their manuscripts to satisfy reviewers’ preferences, which may foster an overemphasis on compelling storytelling at the expense of accuracy and transparency, while also undermining author agency and voice.

Unlike HARKing (hypothesizing after results are known), which typically involves reformulating hypotheses before peer review (Byington & Felps, Reference Byington and Felps2017; Hollenbeck & Wright, Reference Hollenbeck and Wright2017), TARKing shifts the focus to drastic theoretical changes that occur as part of the social influence process during peer review. As Allen et al. also note, iterative theoretical refinement is an essential part of the review process, especially in the fields of industrial and organizational psychology (IOP) and organizational behavior (OB; e.g., Grant & Pollock, Reference Grant and Pollock2011; Sparrowe & Mayer, Reference Sparrowe and Mayer2011). However, TARKing may stretch this refinement process to the point where authors feel pressured to alter their theoretical frameworks, thereby compromising the deductive logic central to hypothesis testing (Popper, Reference Popper1935). With TARKing, the theoretical framework becomes a product of the peer-review process rather than a driver of the researcher’s inquiry.

When and why TARKing may be problematic

As we began to explore TARKing within our author team, sharing our personal experiences with peer review, we realized that we had all encountered one or even multiple instances of it. In these instances, many of us felt that publication hinged on making drastic changes to the original theoretical framing of a paper, even adding completely new theories that had not been considered up to that point. Notably, persuasive academic storytelling and theory building are building blocks of the academic writing and publishing process, especially within OB and IOP (Grant & Pollock, Reference Grant and Pollock2011; Sparrowe & Mayer, Reference Sparrowe and Mayer2011), and we do not mean to question the fruitful learning curve that the peer review process offers. In fact, academic writing—concise, persuasive, and yet inspiring—is contingent on experts providing feedback and on authors iteratively sharpening their theoretical framing and storytelling.

Yet, TARKing is problematic when it cannibalizes this deductive test logic, which requires first deriving hypotheses and then testing them (see Popper, Reference Popper1935). This may happen during the review process when a reviewer or editor asks for or suggests a more “pertinent” theory. Although such suggestions can help authors streamline or improve their theoretical arguments and achieve greater consistency, authors often find themselves navigating a fine line between making useful adaptations to their theoretical frameworks and, in some cases, completely replacing them. We believe that the “end product” of TARKing, a published article in which the theoretical framing has been drastically altered during peer review, may lead to at least four problematic outcomes.

First, theoretical consistency erodes. Theories provide the foundation for hypothesis generation; revising theoretical underpinnings post hoc creates a mismatch between the original research intent and its narrative, distorting the logical progression of deductive discovery. Second, transparency is lost. Transparency in science requires that hypotheses and theoretical frameworks be declared a priori; TARKing obscures this process, making it difficult to discern whether findings stem from genuine theoretical exploration or were retrofitted for publication. Third, there is an inflated perception of precision. Altering theoretical frameworks to align with results can create an illusion of precision and predictive power that does not reflect the reality of the research; such illusory precision may not hold up in future replications and thus contributes to the replication crisis affecting many fields. Last, the cumulative nature of science is compromised. Scientific progress depends on building upon established theories; when frameworks are retrofitted to match results, trust in these theories diminishes, weakening their ability to guide future research effectively. It is important to note that some journals, such as Academy of Management Discoveries (AMD), explicitly allow for theorizing after results are known. This approach encourages researchers to explore unexpected findings transparently and to build theory inductively.

Examples and scholarly debate on TARKing

We felt it would be worthwhile to explore TARKing empirically by conducting interviews with renowned scholars in the fields of IOP, OB, and other psychological disciplines (e.g., cognitive psychology). In essence, the findings revealed that TARKing seems to be commonplace and is indeed perceived as problematic. For example, one interviewee from the field of OB, with several decades of experience publishing in top-tier journals such as the Journal of Applied Psychology and the Academy of Management Journal, noted: “I have had experiences of the review team pushing me to recast theories in ways that made me uncomfortable.” Another interviewee remarked: “The reviewer suggested I use a different theory, I didn’t think of this theory before, but it kind of fit and I didn’t want to risk rejection,” and a third said: “The reviewer suggested changing the label of a specific variable, which brought me to change all the literature review.”

In addition to these illustrative experiences, we also encountered recurring types of reviewer requests in our own peer review experiences. The following two examples (paraphrased here, not verbatim) illustrate how reviewer requests can extend beyond constructive feedback into pressure for theoretical conformity:

In the first example, a reviewer may note the following:

The manuscript could be considerably strengthened by drawing on [Theory X] instead of [Theory Y] (…) This framework seems particularly relevant to the phenomenon under study and may provide a more appropriate grounding than the current theoretical approach. The authors might consider reframing their arguments accordingly.

Another example likely to prompt TARKing is the following:

It may be useful to reconsider the labeling of [Variable Y] (…) An alternative label could better capture the construct and align more closely with the existing literature. Revising this terminology throughout the manuscript and theory part would help clarify the theoretical positioning.

These examples suggest that such requests can go beyond constructive feedback and theoretical refinement; they contain an element of pressure, fear, and conformity. Implicit in these statements is that the peer review process is often characterized by a more or less pronounced power imbalance between authors and the review team. Authors conform and subject themselves not only to the review team’s recommendations but also to its most extreme requests, just to “get that paper published,” which ultimately creates one or more of the four perils discussed above. A system in which early-career researchers are under immense pressure to publish in top-tier journals to secure a tenured position further exacerbates the pressure to conform to whatever reviewers suggest.

In response to these first empirical insights confirming that TARKing is common and can be highly problematic, we organized a professional development workshop (PDW) at the 85th Annual Meeting of the Academy of Management (Weiss et al., Reference Weiss, Doden, Antino, Schmutz and Unger2025), bringing together renowned editors and researchers from different fields of OB and IOP. The aim of the PDW was to discuss the antecedents and consequences of TARKing, including how it influences the transparency, consistency, and replicability of scientific research and theory development. The invited panelists as well as voices from the audience again confirmed what our anecdotal and empirical evidence had surfaced: TARKing can indeed blur the line between legitimate theoretical refinement and practices undermining the scientific process. We discussed best practices on how to manage TARKing during peer review with the goal of creating a forward-looking agenda for improving peer-review practices, focusing on how authors, reviewers, and editors can ensure theoretical transparency and integrity.

Potential remedies against TARKing

As TARKing seems to threaten scientific rigor and research integrity, we conclude this comment by outlining possible remedies and solutions, some of which emerged in the PDW discussions, while others are drawn from the relevant literature on voice and conformity pressures.

Create a transparency paragraph

One critical starting point for reducing or even avoiding TARKing is to include in the manuscript a paragraph that transparently documents theoretical changes. For example, the Journal of Applied Psychology and many other IOP journals already require authors to be fully transparent about their methods, data, and results. The same openness could be achieved if authors transparently stated which theories and framings had been altered during peer review and why. Similar to Allen et al., we advocate for greater transparency in the review process: making reviewer comments, editor decision letters, and author responses available as supplemental files, and indicating whether changes were made in response to reviewer feedback. Such openness would reduce the hidden incentives for retrofitting theory to results.

Editors and journals should establish norms

Another critical issue in the context of peer reviewing and the occurrence of TARKing, noted in the PDW discussions, is that editors should more fully assume their guiding role during peer review and set out specific norms for reviewers and authors. Some journals, such as the Journal of Business and Psychology, already implement this practice by issuing checklists to reviewers and authors that specifically address tone and constructive language during peer review. Editors, as the authorities during peer review, should establish common ground for authors and reviewers and monitor that both parties adhere to the established norms throughout the review process. These norms should differentiate between acceptable and unacceptable post hoc changes and be clearly communicated on the journal’s website. This would also limit unnecessarily extensive, multiple-round reviews, which have recently been pointed out as problematic (Spector, Reference Spector2024) and which Allen and colleagues suggest avoiding until there is clearer evidence of their effects on scientific output.

Enhance psychological safety and strengthen author voice during peer review

Echoing the above remedy, we also suggest that psychological safety needs to be created during the peer review process. In line with how psychological safety is defined in the team and voice literatures (Edmondson & Lei, Reference Edmondson and Lei2014), we submit that authors should feel safe speaking up with concerns during peer review and questioning reviewers’ suggestions. To this end, the review team, and editors in particular, should explicitly invite authors’ concerns and welcome critical responses to the review team’s requests, particularly when the required changes pertain to the theoretical framing. This can counteract potential power imbalances and the perceived pressure to conform to every suggestion from an editor or reviewer. We believe that these practices can help minimize TARKing by advancing research integrity and strengthening the voice of authors during peer review, while furthering the critical nature of science.

Funding statement

This research has been partly funded by a grant from the Swiss National Science Foundation awarded to Jan B. Schmutz (Grant number PCEFP1_203374).

References

Allen, T. D., French, K., Avery, D. R., King, E., & Wiernik, B. M. (2026). Developmental reviewing: Is it really good for science? Industrial and Organizational Psychology, 19(1), 1–15.
Byington, E. K., & Felps, W. (2017). Solutions to the credibility crisis in management science. Academy of Management Learning & Education, 16(1), 142–162. https://doi.org/10.5465/amle.2015.0035
Edmondson, A. C., & Lei, Z. (2014). Psychological safety: The history, renaissance, and future of an interpersonal construct. Annual Review of Organizational Psychology and Organizational Behavior, 1(1), 23–43. https://doi.org/10.1146/annurev-orgpsych-031413-091305
Grant, A. M., & Pollock, T. G. (2011). Publishing in AMJ—Part 3: Setting the hook. Academy of Management Journal, 54(5), 873–879. https://doi.org/10.5465/amj.2011.4000
Hollenbeck, J. R., & Wright, P. M. (2017). Harking, sharking, and tharking: Making the case for post hoc analysis of scientific data. Journal of Management, 43(1), 5–18. https://doi.org/10.1177/0149206316679487
Popper, K. R. (1935). Logik der Forschung [The logic of scientific discovery]. Springer.
Sparrowe, R. T., & Mayer, K. J. (2011). Publishing in AMJ—Part 4: Grounding hypotheses. Academy of Management Journal, 54(6), 1098–1102. https://doi.org/10.5465/amj.2011.4001
Spector, P. (2024, February 26). A peer reviewer crisis in the organizational sciences. https://paulspector.com/a-peer-reviewer-crisis-in-the-organizational-sciences/
Weiss, M., Doden, W., Antino, M., Schmutz, J. B., & Unger, D. (2025). Where is my framework? The hidden perils of theorizing after results are known (TARKing). Professional development workshop presented at the Academy of Management Annual Meeting, Copenhagen, Denmark.