This chapter looks at social influences on neuroscience. It outlines that science is a social system, subject to various social pressures that can affect what we study, how we study it, and how we interpret the data we obtain. These pressures include financial conflicts of interest, claims to priority, scientific prizes, peer review, and ‘scientmanship’ that attempts to promote or suppress certain scientific views and scientists. Recent surveys have begun to quantify these pressures, suggesting that social pressures and career structures encourage behaviours that make science a difficult career for those lower in the scientific hierarchy, including racial and sexual biases, and allow those higher up to use their prominence to influence how science is done and the claims that are made. I highlight that awareness of these negative social influences is starting to lead to approaches that aim to address these issues.
The British system of quality assessment of research in universities, known as the Research Assessment Exercise (RAE), has recently been the subject of major public policy review and debate. The system of research quality or performance assessment has been running for over twenty years, although many of its facets have changed, as has the increasingly marketised political economy. Nevertheless, the UK RAE has been the prototype for the growth and development of such systems internationally, although how different countries have conceived of such forms of review has varied greatly. The question of the relationship between research quality in higher education and the public funding of research lies at the heart of what has become a contentious and acrimonious debate in the UK. While these issues can be seen as fundamentally about social and economic matters, in fact the social sciences as an organised group of subjects or interests have not played a key role in the public arena. This article outlines the contours of the recent debates in the UK, by comparison and contrast with the ways in which such systems of performance and quality assessment have been debated inter alia in Australia, New Zealand, France and the Netherlands. In essence, the issues have centred on questions of performance measurement, known as metrication and bibliometrics, versus social judgments about research quality.
Publications have become the single most important factor in career evaluation in the social sciences, as well as in most other academic disciplines. This has in turn led some scholars to examine the existence of potential biases in peer-reviewed publications. Teele and Thelen (2017) have shown that political science is not free from such biases. This article examines publication patterns and the peer-review process for the European Journal of Political Research. It relies on data on more than 5000 submissions between 2006 and 2017. I look at possible biases at the different stages of the publication process: submission, desk evaluation, review and acceptance. Results show that the journal's processes are free from bias but confirm that submission patterns still differ, despite convergence in recent years.
The European Research Council (ERC) has been a pivotal innovation in the set of funding instruments that the European Commission has established for fostering research and innovation in the European Union. With more than 8000 projects funded so far, it is worth asking what empirical evidence exists regarding the ERC’s specific impact on the social sciences. This article provides some basic data, along with descriptive statistics, on the social scientists who have been sitting on ERC evaluation panels and on ERC-funded research projects from the social sciences. The article ends with a discussion of the data and poses questions for further investigation.
This article reviews and evaluates the process and experience of embedding internationalisation in the curriculum of a School of Social Sciences at a UK higher education institution. It reflects upon the merits and limitations of using a peer-reviewed checklist approach compared with alternative approaches, such as the provision of online toolkits, resource lists and internationalisation centres or units. Through an evaluation of the peer-reviewed checklist approach, this article sheds light on the gap between the desire by institutions to embed a richer understanding of internationalisation into the curriculum, and the uptake, awareness and implementation of this material by academic teams on the ground. The article asserts that in order to be effective, efforts to internationalise the curriculum need to be embedded in routine academic processes. It favours a ‘mixed’ approach, suggesting that a predominantly quantitative checklist approach can, in certain circumstances, prompt further (qualitative) reflection and engagement by those using the checklists, and, by so doing, act as a tool for change rather than simply providing a means of assessing or measuring internationalisation.
Achievement of the full set of EU objectives in the long run requires basic and critical research in the social sciences and the humanities. A European Research Council (ERC) may offer economies of scale, the alleviation of coordination problems, and the provision of public goods or ‘club goods’ to the social sciences and humanities. It should focus on data sharing and large comparative projects; on raising public awareness of the value of the social sciences and humanities; and on funding basic and critical research in these disciplines – not just research offering immediate-term extrinsic pay-offs. In order to function properly, such a body should develop standards of assessment and peer review processes that are appropriate for research in the social sciences and humanities. An ERC must receive ‘fresh money’; it must minimise transaction costs – both to attract good applicants and to fund as many of them as possible – and, by giving priority to academic excellence over Lisbon relevance and geography, it must maximise its credibility as a supporter of high-quality research. At a time when competition is supposed to foster excellence in research, academies and private funding bodies must continue to be competitors of the European Research Council.
The article describes the history and the process of editing the Czech Journal of Political Science. First, it puts the journal into the general context of the development of Czech political science. The journal is now one of three well-established political science journals in the Czech Republic, and invites contributions in all subfields of the discipline. The article discusses the limitations of a social scientific journal published predominantly in a small-nation language. Second, it focuses on the particular steps of the editing process. It primarily describes the peer review process.
Publication in academic journals is a critical part of the academic career. However, writing academic papers and getting them published is not a straightforward task. This article seeks to provide editors’ insights into the process of publishing by outlining common factors that lead to papers being rejected as well as charting strategies that ensure papers have the best chance of being sent out for review. The article discusses the important issue of peer review, including how best to respond to reviews and the expected academic conventions in terms of acting as reviewers.
After many years of having a loosely structured thesis seminar for our senior majors, the Tri-College Linguistics Department recently redesigned our program to offer students a highly scaffolded environment in which to complete their capstone requirement, which has led to improved outcomes. We argue here for the benefits of asking students to write a senior thesis and to carry out original, authentic research on a topic of their choice. We describe our seminar design and its key components—frequent incremental assignments, peer and instructor feedback leading to repeated revisions, and intentional community building—and suggest how the program might be implemented, in whole or in part, at other institutions with similar pedagogical goals.
European political scientists lag behind their US counterparts when it comes to publication in peer-reviewed outlets, and for many established academics publication declines as they reach more advanced stages of their careers. I attribute this mainly to a lack of incentives to publish more and through better channels. Based mainly on recent Norwegian developments, I acknowledge that efforts are being made to improve the situation, but argue that more can be done by universities, research institutes, and research councils.
The phenomenon known as emergency eLearning saw many institutions of higher education switch from face-to-face learning to virtual or online course delivery in response to the COVID-19 pandemic. The transition posed a unique suite of challenges to instructors and students alike, especially in the case of active learning pedagogy. This article reflects on the experiences of a multi-institutional, multi-term pedagogical project that implemented peer review assignments as opportunities for asynchronous but nevertheless active learning. We share instructor experiences from the course design and application stages of courses in International Relations and political economy, discuss the ability of peer review assignments to create active learning opportunities in online courses, and reflect on how our own pedagogical development benefited from the community of practice.
From academic years 2011–2012 until 2015–2016 (inclusive), the authors developed an innovative formative peer review assessment strategy to build undergraduate students’ academic writing skills within the framework of a second year introductory International Politics module. This involved students anonymously reviewing assigned fellow students’ draft essay introductions and indicative bibliographies, supported by a bespoke rubric delivered via Turnitin Peermark. This article recounts the educational research-driven rationale underpinning the peer review educational design and implementation in the International Politics module, before qualitatively exploring its perception and reception by learners through key “student voice” data, complemented by commentary from learner focus groups. Following the best traditions of learning and teaching articles in this journal, we conclude by sharing the challenges and benefits of implementing such a formative assessment strategy. We also offer practice-based advice, drawn from our experiences, for colleagues who may want to emulate our approach, and we acknowledge the limitations of our qualitative practice-based study alongside a potential avenue for expanding on this study.
The gender gap pervades many core aspects of political science. This article reports that females continue to be under-represented as authors and reviewers in European Union Politics and that these differences have only diminished slightly since the second half of the 2000s. We also report that females use more cautious and modest language in their correspondence with the editorial office, but do not find evidence that this under-studied aspect of the gender gap affects the outcome of the reviewing process. The authors discuss some measures European Union Politics and other journals might take to address the imbalance.
Most European universities lag behind the best universities in the Anglo-Saxon world. A key challenge is to raise resources per student in Europe to US levels. The Lisbon agenda demands fundamental reform of the European university system in order to enhance efficiency, yet avoid grade inflation, to foster more competition, to allow for much larger private contributions accompanied by income-contingent student loans, and to attract larger numbers of foreign students. European universities will be pushed to compete with each other, to offer better incentives and to generate substantially more income. Universities will be stimulated to provide sufficient diversity and quality to meet the demands of a growing and diverse student body. Their ambition should be to educate the best minds in society irrespective of whether their parents are rich or poor, academically inclined or uneducated. A shift from grants to loans and an increase in tuition fees are justified by high returns. Reform should lead to a better and more equitable system of European universities.
Peer review is part of the bedrock of science. In recent years the focus of peer review has shifted toward developmental reviewing, an approach intended to focus on the author’s growth and development. Yet, does the focus on developing the author have unintended consequences for the development of science? In this paper, we critique the developmental approach to peer review and contrast it with the constructive approach, which focuses on improvement of the research. We suggest the developmental approach, although with laudable aims, has also produced unintended consequences that negatively impact authors’ experiences as well as the quality and meaningfulness of the science published. We identify problems and discuss potential solutions that can strengthen peer review and contribute to science for a smarter workplace.
The peer review process is fundamental to academic publishing, guaranteeing the integrity and quality of the research upon which we depend. However, it is also infamous for its sluggishness—occasionally excruciatingly so. For numerous authors, the prolonged wait for feedback on their articles might seem interminable, particularly when they are enthusiastic about disseminating innovative discoveries to the public. But why exactly does peer review take so long? The reasons are complex and multifaceted, involving challenges faced by editors, reviewers, and authors alike. By understanding these challenges, we can start to see the bigger picture and work towards solutions that might speed things up.
“Patience requires knowing not just the cost of delay, but also the benefit of delay”
“The two most powerful warriors are patience and time.” - Leo Tolstoy
“Lost time is never found again.” - Benjamin Franklin
The chapter discusses the challenges associated with policymaking based on evidence, emphasizing the need for individuals to become informed consumers of research. It highlights the complexities of science, underscoring that evidence alone cannot dictate policy decisions due to uncertainties, differing interpretations, and the influence of values. The role of the media in distorting scientific information is addressed, citing examples such as the claimed links between cell phone use and cancer or between video games and violence. The chapter advises readers to be skeptical, check original sources, and consider the limitations of research. Conflicts of interest are introduced as potential sources of bias, with transparency and disclosure seen as essential safeguards. The chapter further stresses the importance of discerning the quality of research, considering statistical significance and effect size, understanding random clusters, and addressing issues of reproducibility and replicability. Finally, it discusses the prevalence of scientific misconduct, its impact on public trust, and measures being taken to detect and prevent it, including training, open science practices, and clear policies for addressing allegations.
After Circular A-4 “Regulatory Analysis” had served various Presidential administrations for 20 years, in 2023 revisions were proposed and made to update and modernize regulatory guidance. At the behest of the Office of Information and Regulatory Affairs within the Office of Management and Budget, peer reviewers were nominated, and a peer review of proposed revisions was organized. Joseph Aldy, Cary Coglianese, Joseph Cordes, R. Scott Farrow, Kenneth Gillingham, William Pizer, Christina Romer, W. Kip Viscusi, and I were selected. The consequent peer reviews are the focus of this synopsis. After reading the comments from my fellow peer reviewers, I was impressed with the careful, thoughtful advice given. When asking nine peer reviewers with various backgrounds to comment on proposed revisions to guidance on regulation, we might expect to get at least 10 different views. Yet, I sense basic agreement on several key aspects of the topics of the notable proposed updates. The degree of consensus is reassuring. Less reassuring, and counter to the actual revisions adopted, is that the peer reviewers mostly agree that fundamental aspects of the updates on the discount rate, distributional analysis, and scope are ill-advised.
After reviewing a wide range of topics, we conclude that good science requires greater efforts to manage biases and to promote the ethical conduct of research. An important problem is the belief that randomized controlled trials (RCTs) are exempt from systematic bias. Throughout the book, we acknowledge the importance of RCTs, but also emphasize that they are not immune from systematic bias. A second lesson concerns conflict of interest, which must always be taken seriously. Most large RCTs are sponsored by for-profit pharmaceutical companies. We identify leverage points to address these problems. These include cultivating equipoise – the position that research investigators enter a study with the understanding that either a positive, negative, or null result is of value. We return to several other themes prominent throughout this book, including the reporting of research findings and serious problems with our system of peer review. The book concludes with recommendations for reducing conflicts of interest, improving transparency, and reimagining the peer review system.