The Critique of Empirical Social Science: New Policies at Management and Organization Review

Abstract

At the June 2016 meeting of the International Association for Chinese Management Research, MOR organized a symposium to discuss the mounting criticisms of empirical social science and the changes they have prompted, as part of ongoing discussions affecting journal reviewing policies. This article reviews the history of modern empirical social science as the foundation of management, organization, and strategy research, as well as the criticism of social science research, which has reached the point that some critics describe current publication norms as encouraging and enabling the publication of junk science. Most importantly, this article outlines MOR's strategy going forward and the new reviewing initiatives that MOR is implementing as of Volume 13 (2017).

Several markers define the glorious outpouring of theorizing and systematic empirical research that has supported management, organization, and strategy since the mid-1950s. Any list of these markers should include two seminal reports published in 1959. The first, commissioned by the Ford Foundation and written by Robert Aaron Gordon and James Edwin Howell, was Higher Education for Business. The second, commissioned by the Carnegie Foundation and written by Frank C. Pierson, was The Education of American Businessmen: A Study of University-College Programs in Business Administration. The reports gave a jolt to established research and education in management, propelling both into the realm of the social sciences. They initiated a 50-year effort to establish scholarly respect for business school research.

Any discussion of markers must also include the publication, around the same time, of Simon's Administrative Behavior (1947); Blau's The Dynamics of Bureaucracy (1955); Gouldner's Patterns of Industrial Bureaucracy (1954); March and Simon's Organizations (1958); Burns and Stalker's The Management of Innovation (1961); Blau and Scott's Formal Organizations (1962); and Cyert and March's Behavioral Theory of the Firm (1963). The list of markers should also include the founding, on December 1, 1953, of the Institute of Management Sciences and the journal Management Science, with C. West Churchman as its founding Editor-in-Chief. By the time Churchman stepped down as editor, Management Science was recognized as a leading management journal, publishing research in the disciplines related to business, including game theory, organizational psychology, leadership, and the epistemology of science, a lasting concern of Churchman's. In 1956, Administrative Science Quarterly was founded. In its inaugural essay, ‘On Building an Administrative Science’, James D. Thompson voiced an aspiration for research patterned after the rigor and precision of physics and the practice of engineering. This essay probably hastened the race for quantification in management research. In 1961, the Aston group of organization researchers, led by Derek S. Pugh, began to conduct surveys and statistical analyses in organization research, pioneering the journey toward a science of administration (Loveridge, 2013).

‘Science is highly esteemed’, wrote Chalmers (1999: xix), and the word ‘science’ in the names of the two pioneering journals signified the hope of elevating management research to a respectable social science discipline. It is true that whether a subject is regarded as scientific is a matter of convention (Popper, 1959), but the ‘science’ label was not attached arbitrarily. The fact that management research differs from physics did not imply that it had to adopt completely different methods or norms of practice. Rather, then as now, social and natural sciences share similar methods and norms – they share a certain degree of scientificity (see Tsang, 2017: ch. 8). This editorial assumes that management researchers still consider themselves social scientists who conduct research that strives to affect managers, employees, stakeholders, and society at large.

RECENT CRITICISMS OF MANAGEMENT AND ORGANIZATION THEORY

Recently, the field has been criticized for its pursuit of novelty over truth, its lack of connection with the practice of management and organization, and the vulnerability of its scientific claims.

On the sixtieth anniversary of Administrative Science Quarterly, Gerald Davis, its Editor-in-Chief, lamented that organization theory had come to value novelty, curious oddities, and supposedly counterintuitive findings over ‘truth’ and the accumulation of knowledge (Davis, 2015). Stephen R. Barley (2016), a past editor of ASQ, concurred and pointed out that this tendency is reinforced in the way doctoral students are trained, journal reviewers and editors select manuscripts, and universities award positions and promotions. Their protests recall the observation by the Nobel laureate Paul Krugman (2009) that a scientific preference for beauty over truth led to the neglect of the limitations of human rationality, institutions, and markets, making theories elegant but hardly predictive. Such tendencies may be institutionalized but should not be taken for granted: Einstein's theory of relativity replaced Newtonian mechanics not because it was novel or counterintuitive but because it explained observed reality better (Tsang, 2017).

Business schools err when they choose a performance evaluation convention that overemphasizes the number of publications over their quality and when they, somewhat arbitrarily, rank journals as A, B, or C. It seems that business schools are substituting journal impact scores, a notoriously unreliable measure of a scholar's productivity (Baum, 2011, 2012, 2013), for careful evaluation of substance. Thus, researchers have neglected central issues of the times, such as the sharing economy, on-demand employment of workers, and the demands and consequences of economies in transition, as well as social issues, such as the growing income gap, social inequality, and sustainability. Not only has management research failed to find practical applications for theoretical ideas, but the field has also avoided addressing serious challenges facing organizations and society. Moreover, an unanticipated consequence of the quest for novelty and ‘interesting’ theoretical tweaks has been the fragmentation of theoretical frameworks as well as the emergence of ideologically based theories and supporting empirical research (Lewin, Weigelt, & Emery, 2004). Tsui (2016) contends that business school research is disconnected from practice and has an overwhelming pro-management bias. She develops a strong argument that business school research must also consider the societal challenges that are consequences of the existing business system, not only in the United States but also in other economies, especially transition economies (see also Corley & Gioia, 2011; Ghoshal, 2005; Sarasvathy, 2003; Shapiro, Kirkman, & Courtney, 2007; Starbuck, 2004; Walsh, Arora, & Cohen, 2003). We see a serious disconnect between management research and practice. For example, few faculty members use their research for teaching. Fewer still write articles that managers find useful.

Yet perhaps most damning is the prevailing critique of empirical social science, which questions the validity of our scientific claims.

RESEARCH RIGOR, REVISITED

Nearly six decades after the Ford Foundation and Carnegie Foundation reports, social science frequently falls short of the scientific standards of falsifiability, replicability, and data transparency, which are foundational for accumulating scientific knowledge, the ideal that Thompson (1956) envisioned.

‘The truth is under attack’, observed Levine (2012), and this impression has only intensified since he expressed that sentiment. An extensive survey shows how common ‘questionable research practices’ are among psychologists (John, Loewenstein, & Prelec, 2012): Roughly two-thirds of respondents admitted failing to report all the dependent measures in a study. Half of them confessed to reporting selectively, discussing only results that ‘worked’. A third acknowledged HARKing: claiming unexpected results as if they had been hypothesized in advance (Kerr, 1998). Even more troubling is that respondents often acted naively, not realizing that such selective reporting can constitute ‘p-hacking’: disguising a false proposition as a true one with supposedly high statistical significance (Simmons, Nelson, & Simonsohn, 2011). Such practices can generate findings that are ‘counterintuitive’ simply because they are false positives. If a researcher rummages through data to look for ‘statistically significant’ relationships, which are then used to form hypotheses, this researcher is lending false support to a theory, reporting as if the theory had predicted these results. In reality, the theoretical predictions were never tested; they were merely found in the data. The findings may be true or may be coincidental – that is never ascertained. This violation makes theories appear stronger than they are, causing scholars to mistakenly rely on them rather than question their validity (Starbuck, 2016: 172).
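The mechanics are easy to demonstrate. Below is a minimal simulation (an illustrative sketch, not drawn from the article; the sample size and number of measures are assumptions) of a researcher who collects several dependent measures where no true effect exists and reports only whichever comparison ‘worked’:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_studies, n_subjects, n_dvs = 10_000, 40, 5   # assumed values
    false_positives = 0
    for _ in range(n_studies):
        # Both groups come from the same distribution: no real effect exists.
        group_a = rng.normal(size=(n_subjects, n_dvs))
        group_b = rng.normal(size=(n_subjects, n_dvs))
        pvals = [stats.ttest_ind(group_a[:, k], group_b[:, k]).pvalue
                 for k in range(n_dvs)]
        if min(pvals) < 0.05:   # report only the measure that 'worked'
            false_positives += 1
    print(f"False-positive rate: {false_positives / n_studies:.1%}")

With five dependent measures, roughly 1 − 0.95^5 ≈ 23% of pure-noise studies yield a reportable ‘significant’ finding, more than four times the nominal 5% rate.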

Backed by extensive analysis, van Witteloostuijn (2015) argues that social science research, and management research specifically, has been overwhelmingly violating the basic falsifiability principles advocated by Popper (1959). According to him, journals' obsession with publishing ‘cutting-edge’ and ‘groundbreaking’ findings has had the unfortunate consequence of leading journals and the peer review process to publish almost exclusively positive empirical findings, without requiring the reporting or discussion of negative or null findings or the analysis of outlier observations, and rarely, if ever, publishing replication studies (see also Bedeian, Taylor, & Miller [2010], which reports on 11 different types of questionable research conduct, including data fabrication, data falsification, plagiarism, HARKing, inappropriately accepting or assigning authorship credit, and publishing the same data or results in two or more publications).

Public attention is drawn to stories of data fabrication, but lax research practices can be more damaging: They are less dramatic yet swamp journals, monographs, and textbooks with false conclusions. Recently, two herculean projects sought to assess the veracity of published research. In psychology, the Open Science Collaboration (2015), composed of 270 scholars from around the world, attempted to replicate 100 studies, randomly selected. Most of the original studies reported positive findings (many of them, when published, were surely regarded as ‘significant’ and ‘counterintuitive’). But when researchers attempted to replicate the findings, they were able to do so for only 39% of the studies. Averaging across studies, the effect size was half of that originally reported.

In economics, a similar effort examined 18 experiments published in the American Economic Review and the Quarterly Journal of Economics (Camerer et al., 2016). In this case, researchers replicated only 61% of the findings. They found that average effect sizes were considerably smaller than originally reported. Closer to home, Goldfarb and King (2016) assessed a sample of 300 published studies. They estimated that 24–40% of the studies could not be replicated. A recent special issue on replication (Ethiraj, Gambardella, & Helfat, 2016) yielded a multitude of nonreplicable findings, including some that have been cited frequently (e.g., Tsang & Yamanoi, 2016). The situation is not necessarily better in the life sciences. After all, it was in a medical journal that Ioannidis (2005) declared that ‘most published research findings are false’. In cancer research, Begley and Ellis (2012) disclosed that scientists could replicate only 11% of published findings. A large replication project is now underway (Errington et al., 2014).
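The arithmetic behind Ioannidis's claim is worth a sketch. His core formula gives the positive predictive value of a ‘significant’ finding, PPV = (1 − β)R / ((1 − β)R + α), where R is the pre-study odds that a tested relationship is true, 1 − β is statistical power, and α is the significance threshold. The input values below are assumptions chosen for illustration, not figures from the article:

    def positive_predictive_value(prior_odds: float,
                                  power: float,
                                  alpha: float = 0.05) -> float:
        # Probability that a 'significant' finding is true (Ioannidis, 2005).
        return (power * prior_odds) / (power * prior_odds + alpha)

    # If 1 in 10 tested hypotheses is true and power is a modest 0.35:
    ppv = positive_predictive_value(prior_odds=1 / 9, power=0.35)
    print(f"Share of true findings among 'significant' ones: {ppv:.0%}")  # ~44%

Under these assumptions, fewer than half of the ‘significant’ findings would be true even with no bias at all; selective reporting and p-hacking push the share lower still.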

Getting Back on Track

Journals including the Strategic Management Journal (SMJ; Bettis, Ethiraj, Gambardella, Helfat, & Mitchell, 2016a), Organizational Behavior and Human Decision Processes (OBHDP; Chen, 2015), and the American Economic Review have adopted policies to counteract flaws in empirical social science. For example, the new guidelines for SMJ state that the journal no longer accepts papers that report asterisks or specific cutoff points for statistical significance (p-values). Authors are expected to report complete empirical results, including negative or null results, and to discuss effect size and its interpretation. The editors of SMJ have also begun to publish replication studies as well as null results (Bettis, Helfat, & Shaver, 2016b).

The OBHDP policies are directed specifically at increasing the reliability of published empirical papers. When a manuscript is submitted, the accompanying cover letter must clearly state whether the data in the article are also the basis of articles in press or under consideration for publication elsewhere. Authors may be asked to include a table of the variables, instruments, or participants in the submitted study that have been published elsewhere. The policy regarding data access and retention states: ‘authors may be asked to provide the raw data in connection with a paper for editorial review, and should be prepared to provide public access to such data if practicable, and should in any event be prepared to retain such data for a reasonable time after publication’. Moreover, OBHDP expects to implement a requirement that, as part of the review process, authors upload their data, syntax, and materials to an open repository, such as the one maintained by the Open Science Framework.

New Policies at Management and Organization Review

As guardians of scientific truth, leading journals cannot discount the need for reforms. But putting reforms in place is rarely easy (Starbuck, 2016). As the editors of MOR, we are extremely mindful of the challenges ahead in counteracting and addressing the criticisms of empirical social science research. The challenge and the direction are clear, yet we are sensitive to the imperative to find a way that best serves this journal and its unique community of management scholars. In the sections that follow, we outline new editorial policies and several initiatives that we believe can differentiate MOR from other journals.

Training and education

We will begin by engaging MOR Senior Editors and Editorial Review Board members to arrive at a shared understanding of new norms for conducting and reporting empirical results, which can accumulate into a reliable body of knowledge on management in the context of China and all other transforming economies. Because this journal is dedicated to revealing indigenous management theories, we are committed to helping scholars design and undertake research that satisfies the criteria of falsifiability and replicability, including data transparency, robustness, treatment of outliers, and null findings. MOR will offer workshops in China and in other transforming economies to establish a renewed understanding of the philosophical underpinnings of management research: the nature of assumptions, theory testing, generalization, post hoc analyses, replication, and qualitative case research.

Recognition for authors who share their work

‘If I have seen further, it is by standing on the shoulders of giants’, wrote Isaac Newton.[1] We recognize that science is a collective effort. Scholars build on the efforts of their predecessors and contemporaries: refining theories, testing predictions, honing instruments. To benefit from others' work, one must have access to it. That is why the currency of science is peer-reviewed publication – making one's work publicly available. Yet any journal article is limited in length, so it necessarily omits some information that may be useful to those who wish to build on its author's research. Given the current reproducibility crisis in science, in which the validity of much published research is questionable, fuller disclosure can bolster validity and renew trust in scientific findings.

MOR will recognize authors who share more than a manuscript by featuring designated badges that recognize exemplary scientific practices. Such badges have been introduced in leading journals in other disciplines, such as Psychological Science and the American Journal of Political Science, and are based on the principles of the Open Science Framework.

MOR will grant an Open Materials badge to authors who deposit their research materials in an open-access repository, such as those of the Open Science Framework (https://osf.io/) or AsPredicted (https://aspredicted.org). The deposited materials should be as complete as possible, allowing an independent researcher to reproduce the reported methodology. Depending on the methodology, materials may include statistical code, questionnaires, interview questions, experimental procedures, and participant instructions (but not data). Separately, MOR will grant an Open Data badge to authors who deposit their data in such an open-access repository. Authors can satisfy this requirement by depositing their entire dataset or a slice of it, as long as the deposit allows an independent researcher to reproduce the reported results. If confidentiality is sought, authors may instead deposit a transformed dataset, as long as it allows reproduction of the reported results. Depending on the methodology, data may include quantitative and qualitative materials, but deposited data may not compromise the anonymity of participants or undermine promises of confidentiality.

Preapprovals — New support for authors

MOR will offer preapproval for studies, drawing on the model of registered reports in the natural and social sciences. To apply for preapproval, authors submit a proposal for a study, explaining its theoretical foundation, reviewing the relevant literature, elaborating on a research question, and proposing the source of data, whether existing or new. Essentially, authors submit what would constitute the research question, literature review, and empirical design sections of a finished manuscript, but without analysis, results, or conclusions. The proposal will be reviewed, and, if it is accepted, the authors commit to collecting data and completing the study as proposed. In return, the journal guarantees publication – regardless of the findings. In other words, because of the importance of the subject matter, MOR will publish the results whether they turn out as hypothesized or not, whether positive or null.

The study is expected to be published in two parts: The first will report results of the study as approved, and the second will present and discuss post hoc analyses, which may arise while analyzing and reporting the originally approved study. A similar preapproval process will also be implemented for qualitative studies.

Preapproval is meant to counteract the prevalence of publication bias: Studies with positive results are more likely to be published than studies with negative or null results, thereby skewing the literature and misrepresenting the likelihood of certain outcomes (Begg & Berlin, 1988). Publication bias may have been exacerbated by the desire for novel results.
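A small simulation makes the skew concrete (an illustrative sketch under assumed parameters, not an analysis from Begg and Berlin): if only studies crossing p < 0.05 are published, the published literature overstates the true effect, consistent with the shrunken effects that the replication projects described above report.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    true_d, n = 0.2, 30     # assumed: small true effect, modest samples
    published = []
    for _ in range(20_000):
        treatment = rng.normal(true_d, 1.0, n)
        control = rng.normal(0.0, 1.0, n)
        if stats.ttest_ind(treatment, control).pvalue < 0.05:
            # Only 'significant' studies enter the literature.
            published.append(treatment.mean() - control.mean())
    print(f"True effect: {true_d}")
    print(f"Mean published effect: {np.mean(published):.2f}")   # about 0.6

The published average is roughly three times the true effect, so an exact replication will, on average, report a much smaller effect.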

Hypothesis testing is not obligatory

MOR encourages authors to bypass the expectation of presenting exploratory research in the guise of hypothesis testing. We will consider exploratory research, meant to identify and describe phenomena, as well as confirmatory research, meant to test hypotheses generated from theory. We expect any manuscript to motivate a research question by framing it in the existing literature, propose a plan for investigating it, and discuss the data for the study. Any statistical analysis should present and discuss all findings: positive, negative, or null.

Post hoc analysis is permitted if labeled as such. Hypothesizing after the results are known is a dubious practice (Kerr, 1998). But a study may include hypothesis testing alongside post hoc analyses, which explore relationships that were not originally hypothesized. This can happen, for instance, if new insights emerge during the analysis. Transparency in applying post hoc analyses will advance science by spurring researchers to conduct follow-up studies that compare findings.

Avoid cutoff points for statistical significance

When reporting statistical findings, authors should avoid referring to arbitrary cutoff points for statistical significance (p-values). As the American Statistical Association recently declared: ‘Practices that reduce data analysis or scientific inference to mechanical “bright-line” rules (such as p < 0.05) for justifying scientific claims or conclusions can lead to erroneous beliefs and poor decision making’ (Wasserstein & Lazar, 2016: 131). In this, they join a chorus of statisticians (e.g., Gelman & Stern, 2006) and management researchers (e.g., Bettis et al., 2016b; Starbuck, 2016). MOR will require authors to report coefficient estimates alongside exact p-values or standard errors.

Not all statistical effects are meaningful or important. We therefore expect authors to interpret their findings, especially effect sizes. Authors are expected to provide readers with a reasonable sense of how strongly an independent variable affects the dependent variable (see, for example, Aguinis et al., 2010). Reviewers and editors may also require that authors offer alternative theoretical explanations, which may be analyzed post hoc using the same data or new data.
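In practice, the requested style of reporting might look like the following sketch (the variables and effect are hypothetical, and statsmodels is one of several suitable tools): coefficient estimates with standard errors and exact p-values, followed by a plain-language statement of magnitude.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    firm_size = rng.normal(size=200)                 # standardized predictor
    performance = 0.3 * firm_size + rng.normal(size=200)

    model = sm.OLS(performance, sm.add_constant(firm_size)).fit()
    beta, se, p = model.params[1], model.bse[1], model.pvalues[1]

    # Report the estimate, its precision, and the exact p-value: no asterisks.
    print(f"b = {beta:.3f}, SE = {se:.3f}, p = {p:.4f}")
    # Then interpret the magnitude in terms readers can judge.
    print(f"A one-SD increase in firm size predicts a {beta:.2f}-SD "
          f"change in performance (R^2 = {model.rsquared:.2f}).")

Reporting b, SE, and the exact p-value lets readers judge both precision and substantive magnitude, rather than receiving a binary verdict of ‘significance’.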

Access to data may be required

The publication of a scientific paper implies that its authors invite others to replicate and build upon their findings. MOR, therefore, encourages authors to make their instruments and data available and recognizes those who do. In addition, during the review process, authors may be asked to provide reviewers with access to instruments and data, including questionnaires and field notes, variable definitions, transformations, and statistical procedures. Such materials will be kept confidential (as all submitted manuscripts are). If authors foresee difficulty in complying with this policy, they must disclose it at the time the manuscript is submitted. Reviewers may be asked to comment on their access to instruments and data.

The complete details of MOR's revised editorial policy, including the implementation of the preapproval option, will be available in issue 13.1.

CONCLUSION

The criticism of empirical social science will not disappear. During the administration of President Ronald Reagan, the behavioral science budget of the National Science Foundation (excluding the economics program) was severely cut. That happened, in part, because empirical social science was not viewed as deserving a designation as a science. Journals would do well to adopt reforms, such as those discussed here, and help administrators recognize that the current conventions of promotion reinforce the perceived irrelevance of business school research. Moreover, the aspiration of this journal, as envisioned by its founding editor Anne Tsui and articulated in her collective writings, was to focus on discovering and giving voice to indigenous management research in transforming economies, such as China. MOR, as a relative newcomer, is more vulnerable to criticism. Hence, MOR is taking the lead in promoting, recognizing, and requiring high-quality research.

In doing so, MOR has an opportunity to attract research inspired by ideas in indigenous traditions, which reveal their contemporary significance through progressive engagement, variation, and reformation. In response to the emphasis on novelty elsewhere, MOR aims to attract indigenous research that is scientific, in the truest sense of the term, yet relevant to management and business.

NOTE

[1] Newton was rephrasing Bernard of Chartres, a 12th-century scholar (see discussion in Merton, 1965).

REFERENCES

H. Aguinis, S. Werner, J. L. Abbott, C. Angert, J. H. Park, & D. Kohlhausen 2010. Customer-centric science: Reporting significant research results with rigor, relevance, and practical impact in mind. Organizational Research Methods, 13(3): 515–539.
S. R. Barley 2016. 60th anniversary essay: Ruminations on how we became a mystery house and how we might get out. Administrative Science Quarterly, 61(1): 1–8.
J. A. C. Baum 2011. Free-riding on power laws: Questioning the validity of the impact factor as a measure of research quality in organization studies. Organization, 18(4): 449–466.
J. A. C. Baum 2012. The skewed few: Does ‘skew’ signal quality among journals, articles, and academics? Journal of Management Inquiry, 21(3): 349–354.
J. A. C. Baum 2013. The excess-tail ratio: Correcting journal impact factors for citation distributions. M@n@gement, 16(5): 697–706.
A. G. Bedeian, S. G. Taylor, & A. Miller 2010. Management science on the credibility bubble: Cardinal sins and various misdemeanors. Academy of Management Learning & Education, 9(4): 715–725.
C. B. Begg, & J. A. Berlin 1988. Publication bias: A problem in interpreting medical data. Journal of the Royal Statistical Society, Series A (Statistics in Society), 151(3): 419–463.
C. G. Begley, & L. M. Ellis 2012. Drug development: Raise standards for preclinical cancer research. Nature, 483(7391): 531–533.
R. A. Bettis, S. Ethiraj, A. Gambardella, C. Helfat, & W. Mitchell 2016a. Creating repeatable cumulative knowledge in strategic management. Strategic Management Journal, 37(2): 257–261.
R. A. Bettis, C. E. Helfat, & J. M. Shaver 2016b. The necessity, logic, and forms of replication. Strategic Management Journal, 37(11): 2193–2203.
P. M. Blau 1955. The dynamics of bureaucracy. Chicago: University of Chicago Press.
P. M. Blau, & W. R. Scott 1962. Formal organizations: A comparative approach. Stanford: Stanford University Press.
T. Burns, & G. M. Stalker 1961. The management of innovation. London: Tavistock Publications.
C. F. Camerer, A. Dreber, E. Forsell, T.-H. Ho, J. Huber, M. Johannesson, & H. Wu 2016. Evaluating replicability of laboratory experiments in economics. Science, 351(6280): 1433–1436.
A. F. Chalmers 1999. What is this thing called science? (3rd ed.). Maidenhead, England: Open University Press.
X. P. Chen 2015. On data transparency and research ethics. Organizational Behavior and Human Decision Processes, 127: iv.
K. Corley, & D. Gioia 2011. Building theory about theory building: What constitutes a theoretical contribution? Academy of Management Review, 36(1): 12–32.
R. M. Cyert, & J. G. March 1963. A behavioral theory of the firm. Englewood Cliffs, NJ: Prentice Hall.
G. F. Davis 2015. Editorial essay: What is organizational research for? Administrative Science Quarterly, 60: 179–188.
T. M. Errington, E. Iorns, W. Gunn, F. E. Tan, J. Lomax, & B. A. Nosek 2014. An open investigation of the reproducibility of cancer biology research. eLife, 3: e04333.
S. K. Ethiraj, A. Gambardella, & C. E. Helfat 2016. Replication in strategic management. Strategic Management Journal, 37(11): 2191–2192.
A. Gelman, & H. Stern 2006. The difference between ‘significant’ and ‘not significant’ is not itself statistically significant. American Statistician, 60(4): 328–331.
S. Ghoshal 2005. Bad management theories are destroying good management practices. Academy of Management Learning & Education, 4(1): 75–91.
B. Goldfarb, & A. King 2016. Scientific apophenia in strategic management research: Significance tests & mistaken inference. Strategic Management Journal, 37(1): 167–176.
A. W. Gouldner 1954. Patterns of industrial bureaucracy. New York: Free Press.
J. Ioannidis 2005. Why most published research findings are false. PLoS Medicine, 2(8): 696–701.
L. K. John, G. Loewenstein, & D. Prelec 2012. Measuring the prevalence of questionable research practices with incentives for truth-telling. Psychological Science, 23(5): 524–532.
N. L. Kerr 1998. HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2(3): 196–217.
P. R. Krugman 2009, September 6. How did economists get it so wrong? New York Times, p. MM36. [Cited 7 November 2016]. Available from URL: http://www.nytimes.com/2009/09/06/magazine/06Economic-t.html?pagewanted=all
S. S. Levine 2012. Walter R. Nord and Ann F. Connell: Rethinking the knowledge controversy in organization studies: A generative uncertainty perspective. Administrative Science Quarterly, 57(3): 537–540.
A. Y. Lewin, C. B. Weigelt, & J. B. Emery 2004. Adaptation and selection in strategy and change: Perspectives on strategic change in organizations. In M. S. Poole & A. H. Van de Ven (Eds.), Handbook of organizational change and innovation: 108–160. Oxford: Oxford University Press.
R. Loveridge 2013. The Aston studies: A journey towards a science of administration? In M. Witzel & M. Warner (Eds.), The Oxford handbook of management theorists: 129. Oxford: Oxford University Press.
J. G. March, & H. A. Simon 1958. Organizations. New York: John Wiley and Sons.
R. K. Merton 1965. On the shoulders of giants: A Shandean postscript. New York: Harcourt, Brace & World.
Open Science Collaboration. 2015. Estimating the reproducibility of psychological science. Science, 349(6251): aac4716.
K. Popper 1959. The logic of scientific discovery. London: Hutchinson & Co.
S. D. Sarasvathy 2003. Entrepreneurship as a science of the artificial. Journal of Economic Psychology, 24: 203–220.
D. L. Shapiro, B. L. Kirkman, & H. G. Courtney 2007. Perceived causes and solutions of the translation problem in management research. Academy of Management Journal, 50: 249–266.
J. P. Simmons, L. D. Nelson, & U. Simonsohn 2011. False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11): 1359–1366.
H. A. Simon 1947. Administrative behavior. New York: The Free Press.
W. H. Starbuck 2004. How much better are the most-prestigious journals? The statistics of academic publication. Organization Science, 16: 180–200.
W. H. Starbuck 2016. 60th anniversary essay: How journals could improve research practices in social science. Administrative Science Quarterly, 61(2): 165–183.
J. D. Thompson 1956. On building an administrative science. Administrative Science Quarterly, 1(1): 102–111.
E. W. K. Tsang 2017. The philosophy of management research. New York: Routledge.
E. W. K. Tsang, & J. Yamanoi 2016. International expansion through start-up or acquisition: A replication. Strategic Management Journal, 37(11): 2291–2306.
A. S. Tsui 2016. Reflections on the so-called value-free ideal: Values and science in the business schools. Cross-Cultural and Strategic Management (formerly Cross Cultural Management), 23(1): 4–28.
A. van Witteloostuijn 2015. Toward experimental international business: Unraveling fundamental causal linkages. Cross Cultural Management: An International Journal, 22(4): 530–544.
J. P. Walsh, A. Arora, & W. M. Cohen 2003. Working through the patent problem. Science, 299: 1020.
R. L. Wasserstein, & N. A. Lazar 2016. The American Statistical Association statement on p-values: Context, process, and purpose. American Statistician, 70(2): 129–133.