
Creativity, Risk and the Research Impact Agenda in the United Kingdom

Published online by Cambridge University Press:  11 January 2018

Michael Power*
Affiliation:
London School of Economics and Political Science, Houghton Street, London WC2A 2AE, UK. Email: m.k.power@lse.ac.uk

Abstract

This article describes the recent requirement for UK universities to account for the social and economic impact of their research, and asks whether this impact agenda may change the conduct of research itself. Three critical issues are highlighted: the epistemology of impact; the problem of quantifying qualities; and the likelihood of impact growing in significance and changing the landscape of research – so-called ‘impact creep’. Overall, the article identifies some features of the research impact agenda that pose risks to creativity and risk-taking by academics.

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© Academia Europaea 2018

Introduction

The problem with creativity is that we do not really know what it is. Is it an input, a process or some kind of outcome? Perhaps the moment we say what it is, it becomes something else and we are dissatisfied – it cannot be grasped. The confusion is compounded by neo-romantic notions of individualized creativity, as if teams, groups and even locations (cf. Silicon Valley) cannot be creative. And we often judge something to be creative because it produces outcomes that we, as societies, like and legitimate. Creativity is therefore as much a function of collective social judgement and approbation as it is a thing in itself. Yet despite these and many other problems with the concept of creativity, we don’t want to give up on it. This is in part because it signifies a distinctive conception of human agency at its most noble, and in part because we think it is fragile and increasingly at risk.

It is not hard to find this sense of pessimism in European academic life. It is widely regarded as having become more ‘managerial’ over the last two decades and it is said that creativity has been displaced by conservatism and standardization in both research and teaching. But while this is a seductive narrative about academic life, and one that appeals to a sense of loss and disenfranchisement, it is also one that requires critical challenge.

In this paper, the notion of creativity is replaced with that of risk-taking. It is not that ‘risk-taking’ is any clearer an idea than creativity. Even the idea of risk-taking depends on what is valued and what is regarded as being at risk. However, risk provides a more limited and specific focus – both on the motivations of academics (including career risk management) when they conduct research and, importantly, on the institutional climates which influence those motivations. In short, the question is: are universities places where intellectual risk-taking is encouraged and supported? More specifically, is the pursuit of agendas encouraged where problem definition is non-normal, where outcomes are highly uncertain and where use-value plays a limited role in systems of academic performance evaluation?

We should be careful about these questions. Kuhn’s famous distinction between normal and revolutionary science has been the subject of much debate and philosophical criticism, but from a policy point of view it reminds us that not all research can be ‘risky’, revolutionary and paradigm-changing, and that much research is concerned with the detailed working out and incremental extension of existing bodies of knowledge within stable ‘epistemic communities’.[1] Furthermore, we can never be sure how to turn Kuhn’s historical thesis about science into prescriptions for an organization and for a division of scientific labour between the normal and the abnormal. In my preferred terminology, how do we design and support a portfolio of research with varying degrees of risk?

With these complexities in mind, this article explores the question of risk-taking in research in the context of a recent specific reform to the governance of UK universities, namely the requirement for UK academics to demonstrate the impact of their research. The background and nature of this requirement are described in the next section before a more critical analysis of three dimensions of the research impact agenda: the epistemology of impact; the quantification of quality; and impact creep. The article concludes with some reflections on risk-taking in research in the face of the impact requirement. This requirement may be a peculiar obsession of the British, but it is likely that it will surface in other university systems and is therefore of wider European relevance.

The Rise of ‘Impact’ as a Performance Value

The problem of how to link university research to national economic performance has been a longstanding policy interest in the UK and many other nations.[2] More generally, policy-makers in developed economies have wrestled for many years with the problem of how to measure whether public interventions are having desired outcomes. Within a general public policy requirement to demonstrate ‘value for money’, there has been a shift in focus from cost control to measurable outputs and, ultimately, to harder to measure outcomes. In turn, this interest in outcomes has come to be articulated in terms of impact, notably in the contexts of development and environmental impact analysis.

Accordingly, before the publication in the UK in 2006 of the ‘Warry Report’, which recommended that universities should measure the impact of their research outside the academy, impact was already a legitimized policy value and goal.[3] Following much negotiation, the recommendations of the Warry Report resulted in a new requirement for the 2014 UK Research Excellence Framework (hereafter REF2014): 20% of funding for universities would be awarded for being able to demonstrate ‘beneficial impact’, defined widely to include social and cultural benefit as well as economic. Impact could also be international and not simply for the benefit of the UK as originally envisaged. So, despite initial resistance and scepticism, impact was established as a new performance norm for universities with significant financial consequences.

The initial policy ambition to develop metrics for impact gave way to a more pragmatic approach by regulators and universities in which the ‘unit of account’ would be qualitative in form – the impact case study (ICS). Pilot studies were conducted which led to the development of a standardized template for ICS production, but one that was pluralistic about the kinds of impact and the forms of evidence that might support a claim for impact. As the implementation process progressed, there was also a need to provide guidance on many detailed and complex issues relating to such things as the time-window (in effect the accounting period) for impact and boundary issues where individuals had moved from one university to another.

When it came to the detailed production of ICSs, many academics found the new requirement challenging. Rather than starting with the research output itself, they were forced to begin with the question ‘what has changed in the outside world as a result of my research?’ Equally, academics had to learn what impact was not. For example, it was not a prestigious public lecture and it was not meeting with practitioners to talk about their problems. These things might be regarded as ‘pathways to impact’ and part of ‘knowledge exchange’, but they were not themselves examples of impact. Accordingly, the writing of ICSs required many academics to adopt a very new orientation to their work.[4]

In the end, REF2014 assessed the research of 154 UK universities, involving over 190,000 research outputs from 52,000 staff, just under four outputs per staff member.[5] Significantly, 6975 impact case studies were produced and evaluated, amounting to one ICS for nearly every nine members of research-active staff in the UK. Importantly, therefore, for the purposes of REF2014, not all research and researchers were required to be ‘impactful’. Just as with the peer review of research in the REF2014 process, the thousands of ICSs for each subject area, and for universities as a whole, were evaluated and graded for their quality (on a scale of 1–4, where 4 is the highest quality). Using these scores, league tables and rankings were produced by the Times Higher Education magazine in the UK. Encouragingly perhaps, the Institute of Cancer Research emerged as the most impactful organization. But there were also subject-specific rankings for impact in the humanities, with Birmingham University topping the table for philosophy, a discipline whose impact could reasonably be assumed to emerge over centuries rather than a decade.[6]

The Epistemology of Impact

It might be assumed that the real problem of the impact agenda is that it is forcing academic research work to be more practical in orientation, more focused on use-value than on fundamental values of curiosity-driven knowledge production as such. This would be a mistake. Many ideas originating within the academy have shaped common-sense thinking and practice in many different fields. Many academics willingly disseminate their work to policy-makers and other audiences in the hope of having influence, or simply because such communication of science is regarded as valuable in itself. The difference with the impact agenda in REF2014 is that this has now been turned into an explicit performance requirement and engagement has been reconceptualized as part of the ‘pathway to impact’. And although it is readily argued that impact is complex and multifaceted, in fact the implicit epistemology is rather simple: research is conducted; the researcher disseminates the research via forms of engagement; and the research has an impact for which evidence can be collected (by the researcher or her institution). This is what might be called the ‘billiard ball’ model of impact, which posits an intuitive causal relationship between discrete, independent elements.

The problem with this model is that, except in very restricted settings, such as the cancer research noted above, it does not describe the actual process of research dissemination, which is far less determinate than policy documents make it seem. However, researchers have dealt with the ‘epistemological’ problem of impact in a number of interesting and, yes, creative ways. In essence, they have cleverly transformed the indeterminacy of evidence of impact into something determinate: in short, they have created their own evidence. The more dignified label for this is ‘solicited testimony’, which is recognized as a legitimate form of evidence collection in evaluation studies. Here the researcher seeks testimony from identified users of her research, who kindly confirm that they have been ‘impacted’ by it.

The device of solicited testimony is attractive for many reasons. First, by definition, evidential traces of impact – if they exist – lie beyond the field of academia, making them costly to discover. Indeed, many UK researchers realized that determining the impact of their research was itself a complex research project in need of (non-existent) resources. Solicited testimony is attractive from this point of view as a low-cost evidence form, easily collected from willing respondents. Second, solicited testimony solves the well-known problem of causal attribution when impacts are likely to be an outcome of many different factors. By definition, the testimony constructs the causal link between the research and the claimed impact, subject to the testimony itself being legitimate and from a trustworthy and independent source. Third, and this is perhaps a more subtle aspect of solicited testimony, the process of solicitation constructs both the trace of impact, in the form of a statement to the effect that such and such happened as a result of the research, and the research ‘impactee’ itself as a new category of agent. Fourth, and relatedly, solicited testimony makes research impact auditable by providing a trace of the impact that is easily documented.[7] In particular, solicited testimony epitomizes a strategy whereby external sources of evidence are interiorized by the impact accounting process. In general, this process requires that traces of impact outside the academy are appropriated and edited in order to support the claims of impact.

It should not be assumed that this ‘constructivist’ view of the relation between research, impact and impactees implies that impact is easy to audit or evaluate. On the contrary, evaluators expressed concerns about the over-use of solicited testimony as an evidence form and stated that ‘it was hard to assess the significance of an impact where the evidence was “nuanced” and in the form of corroborating testimonials’.[8] So the ICS field does not reveal a regulatory style that is confident in its ability to evaluate.

In summary, the case of solicited testimony points to the more general phenomenon that new performance and accounting requirements generate creative strategies by those held to account. The policy object of ‘impact’ has been operationalized and accounted for in a very specific way. The pure model of pathways to eventual impact was largely just that – a model or idea that was both impossible to realize and which provided a misleading epistemology. Yet this might seem to imply that academics somehow took control of the research impact agenda and simply shaped it for their own purposes. As discussed below, this was not quite the case.

Quantifying Qualities?

Despite initial ambitions for a metrics-based system, the impact accounting requirement in the UK was non-quantitative in spirit because the basic unit of account was the ICS. The early pilot studies conducted by the UK regulator revealed that a case study approach was the best operationalization of the demand to demonstrate impact. Hence, although the new requirement is motivated by a desire to measure impact, it does not end up as a tyranny of numerical ‘transparency’.[9]

However, if the ICS as the unit of account is not itself metrics-based it nevertheless feeds an evaluation system that produces a grade score for each ICS. In turn, the aggregate scores can be combined in many different ways to form grade point averages (GPAs) for subject areas and for universities. And, as noted above, these GPAs enabled external agencies, such as newspapers, to construct rankings of various kinds for impact.
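To make the quantitative end-point of this chain concrete, the grading of case studies can be expressed as a simple weighted average. The following is a minimal sketch of the kind of grade point average used in published league tables; the exact weighting convention is an assumption for illustration here, not an official REF formula:

$$ \text{GPA} = \sum_{g=1}^{4} g \cdot p_g $$

where $p_g$ is the proportion of a unit’s impact case studies awarded grade $g$ on the 1–4 scale. A unit with 50% of its case studies at grade 4, 30% at grade 3 and 20% at grade 2 would score $4(0.5) + 3(0.3) + 2(0.2) = 3.3$, a single number that can then be aggregated with others and ranked.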

This chain of transformation from the ICS to grading to ranking raises some interesting questions about the point in accounting systems at which qualities get transformed into quantities for further combination and aggregation. Sociologists tend to focus on the metrics themselves and their effects on organizational actors.[10] This reveals a tendency to ignore the qualitative pre-construction and pre-reduction that makes the quantification of qualities possible. In the case of impact accounting this took a number of forms. First, the format of the ICS was tightly prescribed in terms of sections and each section had a prescribed word count. So, although the ICS was strictly qualitative in form, it was nevertheless precisely specified in a way that would enable the evaluation and grading process. Second, this pre-reduction of the ICS was also facilitated by weakening the ownership of academics and using journalists to edit and write the case studies in an accessible way, simplifying claims that academics would, by virtue of their training, tend to frame modestly and with caveats. Indeed, ICSs are designed for immodesty.

Overall, the ICS is a hybrid accounting document which is pre-constructed to enable a metric, a score, to be attributed to it. Even academics who wholly embraced the research impact agenda, who considered themselves to be highly impactful, and who had extensive evidence of that impact, would nevertheless experience the ICS as a highly constrained ‘accounting’ form. This is not very surprising – to be effective, all forms of accounting are necessarily reductive and standardizing in order that the accounting process can be practical. In terms of reducing complexity and nuance, the ICS is more quantitative than it appears on the surface. The ICS is a servant of metrics.

Impact Creep?

Formally, the requirement for UK universities to demonstrate the impact of their research is limited in scope. Not all research is required to be impactful and relatively few members of the research population (approximately 1 in 9, as noted above) produced ICSs. Yet we should not expect the impact agenda to remain contained in this way, particularly as it will continue to find favour with UK state funding agencies, which are likely to press for an increase in scope.

Whereas, for REF2014, UK universities had to react to a new requirement and develop the capability ad hoc, since that time they have begun to build ‘impact infrastructures’, and academics are being supported by newly appointed impact officers and new devices such as ‘impact trackers’. In short, universities are building support structures for the collection of evidence of research impact. In this way the impact agenda is being hard-wired into organizational routines and processes. As a result, impact issues will become increasingly difficult for all academics to avoid or ignore.

A related feature of the ‘impactization’ of the UK research landscape concerns the shift in impact from an outcome to a performance target. For the 2014 impact evaluation, the research being evaluated for impact had not been conducted with the explicit intention of having impact. In addition, the evaluation allowed for long gestation periods for research to have impact within a specific time window. So, for example, something written in 1997 with, say, a demonstrable impact on UK policy thinking in 2010 was unlikely to have been written with such an impact in mind. The impact was a fortuitous outcome that had to be discovered ex post.

In contrast, following the 2014 exercise, UK universities have turned impact into a target. This means that researchers must begin to think of their research in terms of its prospective impact and must plan to support this. Grant applications in many countries have always required wider engagements and strategies for the communication of results by researchers. This is not new. What is new is the transformation of such recognition of the social benefit of research into the need to produce explicit strategies for research to have impact. We know that accounting systems are not neutral mirrors of activity, and we know Goodhart’s law, which states that measures cease to be good measures when they become targets. Is this happening to UK academics in the case of impact?

There are signals – albeit weak ones – that the impact requirement is indeed changing behaviour and is likely to continue to do so. The first signal arises from the changing time-frame for impact: so-called pathways to impact are becoming shorter. Whereas the 2014 exercise allowed for a 17-year time-frame for impact, in fact most case studies operated with much shorter time-frames (5–6 years). And we would expect this timescale to come down further as researchers’ behaviour changes to manage impact more explicitly as a condition of obtaining public funding. Indeed, there is the paradoxical possibility that researchers will feel pressure to produce impact before, or in parallel with, the research process itself. From this point of view, the so-called research outputs themselves become a by-product of the impact, not the thing that drives it. Another way of putting this is that, despite denials, there is underlying and unintended pressure for research to become more advisory in nature.

A second issue can only be highlighted anecdotally and deserves more empirical investigation. The impact agenda is changing ways of talking and ways of acting. Of course, academics, like other social agents, are likely to adopt ironic attitudes to new performance requirements, but as the word ‘impact’ crops up in conversations with increasing frequency, attention may shift. And subtle shifts in discourse are known to lead to changes in action. Such changes are reinforced by research committees that explicitly encourage new habits of tracking possible impact. Many academics are being advised to log emails as potential sources of data that they might have discarded in the past. Furthermore, the solicitation of testimony is being changed from a hastily constructed effort to provide support for research impact into a form of continuous activity and impact vigilance. This author recently heard that a research report published with his colleagues had been used internally in a large corporation. A decade ago this would have been a matter of passing interest; now there is a need to gather more evidence about the impact.

What are the consequences of this creeping ‘impactization’ of the research agenda? At present one can only point to subtle shifts rather than large-scale changes. For example, the impact agenda requires attention and that takes time that might be spent in other ways. But the critical issue is whether the value attributed to impact will discourage forms of research of a more fundamental nature where impact may not be demonstrable at all or, at best, for many years. Conversely, will researchers gravitate to issues where there is likely to be demonstrable impact over a short period of time, typically in a direct advisory capacity? At stake in all this is the question of intellectual risk-taking or, rather, designing the appropriate balance of risk-taking and incrementalism in a research portfolio. Prima facie we can expect the UK impact requirement to extend its influence over research, fund-raising and career development.

Conclusion: Impact and Risk-taking in Research

It is well known that universities are complex organizations with multiple goals which require compromise and trade-offs. Not only are there trade-offs between investments in research and teaching, but also, even within research-committed institutions, there is complexity created by what have been called ‘multiple institutional logics’.[11] Put simplistically, it can be argued that research activity is subject to two different logics or ideal-typical value-orientations. One is that of autonomous, curiosity-driven activity (‘science as vocation’), the other is that of use-value or application. These logics have always been co-present, varying in their relative strength across institution type and academic field. However, the UK REF2014 represents not only a policy-driven recalibration of their relationship in favour of use-value, but also a reconceptualization of ‘use-value’ in terms of ‘impact’, as discussed earlier.

Some disciplines, such as social work, have found in the impact agenda a basis for reworking their disciplinary identity; others – such as the humanities – have found it alien. This means that the effects of the ‘impact agenda’ are undoubtedly varied; there is no single story to be told about UK academia. Indeed, while some see the impact agenda in the UK as the next logical episode in the rationalization and managerial control of academic work, others regard it as an overdue and legitimate policy demand by states and taxpayers who are willing to take a bet on most research funding but require some evidence of benefit in some cases. Yet the key question from both points of view is how the research impact agenda might influence intellectual risk-taking in research. What organizational and behavioural changes does it bring about?

First, as discussed above, the simplified epistemology can be self-fulfilling where researchers are incentivized to work on ‘easy-impact’ problems rather than the complex kinds of engagement involved in being a public intellectual who seeks to influence public policy. The impact agenda is therefore not to be equated with expanded public engagement and may even lead to the opposite. Second, this self-fulfilling concept of impact is supported by an accounting document – the ICS – which supports simplification both for metrical grading and for auditability reasons. Third, impact creep is likely by virtue of the creation of management infrastructures and roles to ‘support’ impactful research.

However, we should be careful not to conclude too strongly about the ‘impact of impact’. Not all apparent tendencies towards a decline of intellectual risk-taking should be laid at the door of the UK impact agenda. The worldwide emergence of journal lists and rankings has created a kind of ‘top journal’ conservatism among many researchers. In short, careerist incentives among researchers have resulted in a gravitation to journals which, by their central and established position in a field, find it difficult to be receptive to very novel kinds of research, in terms of both method and results. Arguably, these forces of career risk-management in academic life, if they exist, are longer established and more powerful than those of the impact agenda. Intrinsic interest, peer-group recognition and publications in recognized high-quality outlets are still very strong motivational factors in the research culture of UK universities. Being impactful outside the academy does not yet have the status and prestige that other dimensions of performance convey. UK universities may hope for impact but most continue to reward academic journal publications.[12]

From this point of view, despite the arguments above, the requirement to demonstrate impact in the UK is still a minor dimension to be managed in an academic environment that is already inherently conservative. Furthermore, the institutional demand for impact can be seen as a reaction based on policy perceptions that academic life has become specialized and inward-looking within tight circles of peers. We have seen efforts to disrupt this tendency to closed specialization in the emergence of interdisciplinarity as a value, and the rise of impact is most likely a similar kind of reaction. In short, it is arguable that risk-taking in research was a problem long before the appearance of an impact agenda, which is more symptom than cause of intellectual conservatism.

Finally, one seemingly puzzling feature of the UK impact agenda deserves mention. Contrary to expectations, being an intellectual who is engaged in public policy and frequently appears on TV (the ‘media Don’) is increasingly regarded with suspicion on two counts. First, such an individual deviates from the journal-focused academic validation system and, second, they are not ‘impactful’ academics in the new sense: academics who can define impact very precisely and manage it into existence over short time-frames via a process in which research and advice are blurred. From this point of view, the UK research impact agenda is not, as noted above, a new risk to creativity in academic life, but it may well be a risk to a style of public policy engagement by academics that values collective discussion over impact.

Michael Power is Professor of Accounting at the London School of Economics and a Fellow of the British Academy. His research and teaching focus on regulation, accounting, auditing, internal control, risk management and organization theory. His major works include: The Audit Society: Rituals of Verification (Oxford University Press, 1997); Organized Uncertainty: Designing a World of Risk Management (Oxford University Press, 2007) and Riskwork: Essays on the Organizational Life of Risk Management (Oxford University Press, 2016).

References and Notes

1. Kuhn, T. (1970) The Structure of Scientific Revolutions (Chicago: University of Chicago Press).
2. Narin, F., Hamilton, K. and Olivastro, D. (1997) The increasing linkage between US technology and public science. Research Policy, 26, pp. 317–330.
3. Warry Report (2006) Increasing the Economic Impact of Research Councils: Advice to the Director General of Science and Innovation from the Research Council Economic Impact Group (London: BIS).
4. Power, M. (2015) How accounting begins: object formation and the accretion of infrastructure. Accounting, Organizations and Society, 47, pp. 43–55.
7. Power, M. (1996) Making things auditable. Accounting, Organizations and Society, 21(2/3), pp. 289–315.
8. Technopolis Ltd (2010) REF Research Impact Pilot Exercise: Lessons Learned Project: Feedback on Pilot Submissions. London, November.
9. Strathern, M. (2000) The tyranny of transparency. British Educational Research Journal, 26(3), pp. 309–321.
10. Espeland, W. and Stevens, M. (1998) Commensuration as a social process. Annual Review of Sociology, 24, pp. 313–343.
11. Greenwood, R., Raynard, M., Kodeih, F., Micelotta, E.R. and Lounsbury, M. (2011) Institutional complexity and organizational responses. The Academy of Management Annals, 5, pp. 317–371.
12. Kerr, S. (1975) On the folly of rewarding A, while hoping for B. The Academy of Management Journal, 18(4), pp. 769–783.