Chapter 2 discusses the conditions that enabled the emergence of research evaluation systems. Beyond the general phenomena that characterize modern society, such as rationalization, capitalism, and bureaucratization, other elements must be counted among the constitutive conditions of evaluative power. The chapter provides a systematic account of economization and metricization, arguing that these are the two main forces shaping contemporary academia and that they enabled the emergence of research evaluation systems. Economization promotes the idea that science's inputs and products should serve to bolster the economy, while metricization reduces every aspect of science and social life to metrics. The varied impacts of these two forces are described through an expansive account of the social processes, values, logics, and technologies of power that undergird today's research evaluation systems. The chapter thereby lays the foundation for one of the book's key claims: that metrics should be understood as a symptom, rather than the cause, of the difficulties confronting academia today.
Chapter 1 sets out the conceptual framework through which the book examines research evaluation and names the key players and processes involved. It begins by outlining The Evaluation Game's central contention: that research evaluation is a manifestation of a broader technology the book calls "evaluative power." It then describes how evaluative power comes to be legitimized and how it introduces one of its main technologies, the research evaluation system. The chapter defines games as top-down social practices and, on the basis of this framework, presents the evaluation game as a reaction to, or resistance against, evaluative power. Overall, the chapter shows how the evaluation of institutions, and of the knowledge produced by the researchers working in them, has unavoidably become an integral element of the research process itself.
The Introduction opens with a brief sketch of the evolution of research evaluation, followed by a description of the publication-oriented nature of today's academia, providing the contextual information needed to investigate research evaluation systems. It then identifies two critical blind spots in the contemporary literature on research evaluation systems. The first is the absence of the Soviet Union and post-socialist countries from histories of the science of measuring and evaluating research, despite the fact that these countries have played a key part in this history from its very inception. The second concerns the neglect of global differences in studies of the transformations of scholarly communication: discussions of the publish-or-perish culture and of questionable journals and conferences should take into account the contexts in which different countries confront these challenges. Through its overview of diverse histories of evaluation and its identification of core issues in the literature, the Introduction prepares readers for the book's core arguments.
In the concluding chapter, the author asks whether it is possible to move beyond the inevitability of metrics, and what doing so might imply. The greatest challenge, the author shows, lies in individualized thinking about science and in institutions' focus on accumulating economically conceived value. The problem does not lie in metrics themselves but in the underlying logic of economization, and only by uprooting that logic could one change today's academia. Yet any new logic would in turn be legitimized by new metrics. The book's conclusion is therefore neither a proposal for the 'responsible use of metrics' nor a call to abandon all metrics in academia: a third way is needed. The book's key contribution is its rejection of those two responses and its insistence that we set out now on a course that offers hope of charting such a third response. In this spirit, the author sketches out seven principles to keep in mind when rebuilding not only a new system of scholarly communication but, more importantly, an academia that is not driven by metrics.
Chapter 3 presents significant new background material critical for understanding research evaluation systems in Central and Eastern Europe. It builds from the assertion that the history of research evaluation has been written largely from a Western perspective, one that has neglected science in the Soviet Union and Imperial Russia. As a consequence, the beginnings of the scientific organization of scientific labor and the development of scientometrics in the first half of the twentieth century are missing from the literature. Relatedly, research evaluation systems are often incorrectly characterized as technologies that came into existence forty years ago, introducing new ways of establishing relations between the state and the public sector. To correct these oversights, the chapter provides an in-depth analysis of research evaluation within the centrally planned science of the Soviet Union and the countries of the Eastern Bloc. It shows how, decades before the rise of New Public Management and the first Western European systems, centrally planned science introduced a national (ex ante) research evaluation system and assessments of research impact.
Putting the concept of the evaluation game to work in real-world settings in which the author has conducted both qualitative research and scientometric analysis, Chapter 5 demonstrates the utility and distinctiveness of this analytic tool, which rests on a geopolitical perspective obliging researchers to take into account the contexts in which publish-or-perish cultures take shape. The chapter explores how the key actors (players), that is, institutions, managers, publishers, and researchers, play various types of evaluation game. It also addresses the challenge of attributing causality to research evaluation systems and of distinguishing gaming from playing the evaluation game. The distinction is not always easy to draw: the same activity (e.g., publishing in a predatory journal) may count as gaming when it serves to maximize profits, but as playing the evaluation game when it fulfills evaluation requirements and the stakes are not financial bonuses but the maintenance of the status quo under redefined working conditions.
Chapter 6 deals with the main areas in which the evaluation game transforms scholarly communication practices. It focuses on the obsession with metrics, that is, the quantification of every aspect of academic labor; so-called questionable academia, namely the massive expansion of questionable publishers, journals, and conferences; following the metrics deployed by institutions; and changes in publication patterns in terms of publication types, the local or global orientation of research, its content, and the dominant languages of publication. Finally, the chapter underlines the importance of a geopolitically sensitive approach to evaluation games, one able to account for differences in how the game is played in central versus peripheral countries, and in how such practices are valorized depending on the location of a given science system. These differences result not only from differential access to resources and shifting power relations but also, as the book argues, from the historical heritage of capitalist or socialist models in specific countries and institutions.
Chapter 4 examines the diversity of research evaluation systems by considering representative national systems: those implemented in Australia, China, the Nordic countries (Norway, Denmark, and Finland), Poland, Russia, and the United Kingdom. The chapter begins by examining why the Journal Impact Factor has become the most popular proxy for research quality. Next, it analyzes international citation indexes and university rankings. Taking up Chapter 2's insight that evaluative power deploys economization and metricization both as tools of modernization and as means of controlling academia, the chapter then characterizes evaluative powers along three intersecting planes (global, national, and local). These planes have the greatest influence over the varied expressions of the evaluation game and allow for a comprehensive view of current research evaluation regimes in the Global North and South and in the countries of the East. The chapter shows that while evaluation regimes operate in all parts of the world, each region has its own specificity.