Peer review is arguably the cornerstone of scientific advancement. It is also one of the social responsibilities of scientists: Our work would stop dead in its tracks without scientists’ reviews of each other’s research. In particular, peer reviewers recommend which research proposals are funded and which articles are published in academic journals. But peer review is also deeply enmeshed in the crises confronting researchers today – crises implicating the funding and publication processes.
Academic publishing has strayed far from its informal and nonprofit roots, with damaging consequences. Today, journal publishers are making money hand over fist by abusing the generosity of the scientific community, universities, and the public at large. Universities are forced to spend millions of dollars for subscriptions to scientific journals, so that their faculty and students can access the very research these universities already support. And researchers themselves face extortionate fees to ensure that their published work is widely available. All this, and the publication process does not even work terribly well. Publication lead times are long, journals are biased toward positive findings and driven by citation metrics rather than scientific merit, and the system invented by commercial publishers has opened the door to predatory journals that scam scientists and waste their time. Meanwhile, experts are rebelling, increasingly refusing to participate in a peer review process that publishers have hijacked for their own interests.
In this chapter, we break down the current state of affairs and how we got here. A little bit of history reveals how the pathologies in the system accrued and what might be done to remedy them. The fact is that the publication system has been broken for decades: It was broken deliberately, by the publishers, who realized that they could make money by breaking it. Scientists have long recognized this and sought to do something about the situation. We can take inspiration from these past efforts as we try to design a structure of scientific communication that comports with the ethical responsibilities of scientists and successfully advances the public good.
How the Growth of Universities Provoked a Peer Review Crisis
The origins of the peer review and publication crises are intertwined with the growth of research institutions. This expansion is a product, above all, of federal financial support. The first grants awarded by the federal agency that became the National Institutes of Health (NIH) were distributed in 1918, to study venereal disease. These awards were described as “establishing a precedent for the Federal Government to seek assistance of scientists through grants.”1 Today, most scientists would say that they seek support from the federal government rather than the other way around! Expectations – and processes – have shifted dramatically.
Today’s NIH system started coming into focus in 1946, with the establishment of the Division of Research Grants, known since 1997 as the Center for Scientific Review.2 This body manages peer review of most of the proposals submitted to the NIH. Of special importance in this regard are study sections, which were also formed in 1946.Reference Van Slyke3 Study sections are tasked with reviews of most research applications the NIH receives. The first study sections were described as consisting of “outstanding civilian scientists … with full authority and responsibility to make expert recommendation as to whether a research project application is acceptable and can be supported” (p. 561).Reference Van Slyke3 Nearly 80 years later, this remains a fairly accurate characterization.
The year 1946 is not coincidental: The US research enterprise underwent radical expansion immediately after World War II. The war had made plain to Americans the strategic importance of developing economic and military advantages. At the same time, the incredible advances in weapons technologies between World War I and World War II – above all, the development of the atomic bomb – made clear that these advantages were products of basic and applied research. Awed by what science had wrought, and moved by a sense that the United States could do great good in the world, politicians and the public were prepared to support innovations not only in war but also in fundamental knowledge, civilian technology, and human health.
The leading figure within the administrative sphere was Vannevar Bush. Bush had been the chief US wartime research and development advisor, working with Presidents Franklin Roosevelt and Harry Truman.Reference Wiesner4 Thereafter, he was an aide to President Dwight Eisenhower. In peacetime, he continued to press the need for scientific development in order to protect national security. But his vision was larger than even that enormous task: He had plans to build up a national research establishment that could address a wide and endlessly flowing stream of important scientific questions.Reference Bush5 With this in mind, Bush encouraged the White House and the Congress to establish well-capitalized federal agencies that would fund university-based science. In this way, existing institutions, which the public was familiar with, would be deputized to do the public’s work. The institutional results are still with us: Bush was instrumental in the creation of the National Science Foundation and the expansion of the NIH, which was predicated on the notion that laboratory science would lead to spectacular improvements in human health.
Bush could have pressed for government laboratories, but he believed that high-quality research was more likely to thrive in universities. Huge amounts of money were therefore allocated to transform universities into research-intensive institutions.Reference Shaw6 Grants supported not only specific projects but also the costs of running and growing universities. This is the origin of indirect recovery payments, better known as research “overhead” – a sum of money attached to most federal grants, which pays for the costs of laboratories and helps institutions cover ordinary infrastructural expenses. For example, overhead dollars have been used to fund university libraries, which are certainly useful for scientific researchers, but not only for them.
The massive expansion of research has done the United States tremendous good; its universities have produced enormous advances in the natural and social sciences, engineering, and the humanities, in the process becoming global icons and models. Then too, the global impact of research occurring at US universities is not to be underestimated. But the scale enabled by plentiful funding also has its downsides. As the research enterprise swelled, it became unwieldy. As we discuss in Chapter 12, there is now more knowledge than ever, but that knowledge is so abundant that it is hard to synthesize or make sense of in toto. More researchers also means more specialization and a greater need for specialized publication outlets, resulting in a huge profusion of journals, producing a corpus of research far too vast for anyone to read. More productivity means more research to review – so much that, as we detail later in this chapter, experts cannot keep up. And with so much money sloshing around universities, higher education has become an easy target for exploitation by clever entrepreneurs, such as the businesspeople who now run the academic publishing industry.
Who Does (and Doesn’t) Do Peer Review, and Why
Before turning to the troubles accruing in academic publishing, let’s take a moment to look more carefully at peer review. Peer review is, we have suggested, a duty shared by all scientists – and most scientists agree.Reference Gallo, Thompson, Schmaling and Glisson7, Reference Mahmić-Kaknjo, Utrobičić and Marušić8 It is also an opportunity.
Reviewers are the first eyes on new research results and ideas for new research projects – they are on the cutting edge, assessing the latest developments in their fields. They also get a glimpse of the novel directions their field may be taking. Reviewers who participate in grant review study sections may make connections that are valuable for their careers. There is evidence showing that participation in peer review is useful for developing research collaborations and meeting fellow researchers who go on to provide letters of recommendation that help their peers win awards and promotions.Reference Gallo, Thompson, Schmaling and Glisson7
If participation in peer review is both an obligation and an opportunity, it is also undoubtedly a burden. Conducting a thorough peer review requires a good deal of work and time. Peer review of a typical journal submission – assuming the reviewer is adequately familiar with the sort of research they are reviewing – takes between four and six hours.Reference Ware and Mabe9 Grant applications are longer and more complicated than manuscripts, and take even more time to review. Study section meetings are usually multiday affairs, and most participants will have to budget travel time. Lack of time is the most common reason that potential reviewers cite when declining review invitations.Reference Gallo, Thompson, Schmaling and Glisson7, Reference Mahmić-Kaknjo, Utrobičić and Marušić8
In fact, a majority of invitations to review are declined – a serious problem to which we now turn.
Reviewer Supply and Demand
With so many papers to publish, and each requiring at least two peer reviews demanding about five hours of work apiece, the total amount of time invested in peer reviews is staggering. And that is just for manuscripts. In 2022, about 460,000 scientific articles were published in the United States, 37.4 percent of them in the health sciences, for a total of about 172,000 articles.Reference Schneider, Alexander and Thomas10 Journal acceptance rates vary greatly, with the most selective journals accepting less than 10 percent of submissions, but on average journals accept about half of the submissions they receive.Reference Ware and Mabe9 In that case, we could reasonably estimate that about 344,000 health sciences articles were submitted to US journals. We are obviously simplifying, but 344,000 manuscripts, each absorbing ten hours of review time, amounts to 3.44 million hours of review time. If you do the math on the 100,000 grant applications the NIH received that same year, a reasonable estimate is that the total time devoted to their review amounted to about 3 million hours.11 If all of the roughly 200,000 full-time medical school faculty in the United States devoted an hour a week to peer review, they would have about 11 million hours to devote to a roughly 6.5 million-hour project of peer review.12
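The estimates above can be made explicit with a short back-of-envelope calculation. All inputs are the chapter’s own figures; the only assumption we add is a 52-week year, which yields roughly 10.4 million hours of faculty capacity – in the same ballpark as the chapter’s “about 11 million.”

```python
# Back-of-envelope check of the reviewer-hours estimates.
# Inputs are the chapter's figures for 2022; the 52-week year
# is our own assumption.

articles_published_us = 460_000      # US scientific articles published
health_share = 0.374                 # fraction in the health sciences
acceptance_rate = 0.50               # average journal acceptance rate
hours_per_manuscript = 10            # two reviews at ~5 hours each

health_articles = articles_published_us * health_share        # ~172,000
health_submissions = health_articles / acceptance_rate        # ~344,000
manuscript_hours = health_submissions * hours_per_manuscript  # ~3.44 million

grant_hours = 3_000_000              # estimated NIH grant review time
total_hours = manuscript_hours + grant_hours                  # ~6.5 million

faculty = 200_000                    # full-time US medical school faculty
capacity_hours = faculty * 1 * 52    # one hour per week, 52 weeks

print(f"Health sciences submissions: {health_submissions:,.0f}")
print(f"Manuscript review hours:     {manuscript_hours:,.0f}")
print(f"Total review hours:          {total_hours:,.0f}")
print(f"Faculty capacity (1 hr/wk):  {capacity_hours:,.0f}")
```

The point of the exercise is that, on paper, a modest weekly commitment by every faculty member would comfortably cover the demand; the shortfall described below is a matter of distribution, not raw capacity.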
But most faculty don’t spend an hour a week on peer review. They spend their time on other demands, such as research, teaching, clinical care, and service to their institutions. And in science and technology fields – including, and perhaps especially, health fields – they may also be involved in commercial projects. That’s why it’s hard to get potential reviewers to sign on to this responsibility.
One of us (Kaplan) recently interviewed six editors of leading journals in a variety of health fields, including internal medicine, psychology, public health, and behavioral medicine, and one editor in economics. These editors consistently reported that it has grown much harder to find willing reviewers. Whereas 25 years ago these editors could expect to approach five distinguished scholars and wind up with two or three completed reviews, now they send 15 or 20 invitations before receiving two completed reviews, and rarely from the most recognized scholars. Top experts usually don’t complete reviews. It should be no surprise, then, that these editors also report reduced review quality.Reference Tite and Schroter13, Reference Willis14 And the situation appears to be similar with respect to grant peer review. One of us (Schmaling) asked two NIH peer review administrators to estimate the proportion of potential peer reviewers who accept invitations. They both said around 25 percent, with one clarifying that their estimate would apply to reviewers who do not know them.
Notably, one of these NIH administrators pointed out that the acceptance rate is around 50 percent when they reach out to potential reviewers with whom they have a preexisting relationship. This is a sign of a broader problem: The reviewer pool is shallow. Consider: Peer reviews are happening – those hundreds of thousands of articles are being published – but the review load is not spread evenly.Reference Gallo, Thompson, Schmaling and Glisson7 It is fair to speculate that a large contingent of potential reviewers are not asked to review (though they may be prepared to do so), as journal editors are likely to go “back to the well”: Because it is hard to get scientists to accept review invitations, those who do accept mark themselves as willing, and busy editors will seek them out first.
Studies have found that a small contingent of researchers handles the bulk of review.Reference Gallo, Thompson, Schmaling and Glisson7 Other data bear this out. A study of 2,329 journals published by the major scientific publisher Elsevier found that a large majority of invitations were not accepted. This study broke down acceptance rates by gender, finding that 36.3 percent of women accepted invitations to review and 39.7 percent of men accepted.Reference Squazzoni, Bravo, Grimaldo, García-Costa, Farjam and Mehmani15 Another study of 21 BMJ Publishing Group journals, also broken down by gender, found that women accepted 32.6 percent of review invitations and men 40.2 percent.Reference Messaoud, Schroter, Richards and Gayet-Ageron16 The reduced rate of reviewing by women may reflect greater burdens on women’s time, as women typically devote more hours to housework and childcare than men do.
We are sympathetic to the dilemma that editors and review administrators face: They need reviews to get done, and one way to accomplish that goal is to repeatedly tap the shallow pool of reliable peer reviewers. But this group is overburdened and burned out.Reference Gallo, Thompson, Schmaling and Glisson7, Reference Severin and Chataway17 This problem is in dire need of a solution. The reviewer pool must be deepened, and incentives should be considered.Reference Ellwanger and Chies18 We offer suggestions later in this chapter.
The Crisis in Academic Publishing
For most of the history of science, research communication was informal. Research was presented at scholarly societies or described in private letters. Journals, which became more common in the early twentieth century, were usually published by university presses or scholarly societies. However, during the second half of the twentieth century, commercial publishers discovered the profitability of journal publishing.
Journal publishers have thrived when publishers of other periodicals have faltered because they are able to take advantage of an extraordinarily forgiving business model: Journal publishers get all of their contributions and expert editing for free. Most magazines have to pay editorial staff and writers. Academic publishers get article manuscripts for free, peer review for free, and expert editing for free, the latter supplied by scholarly societies with whom the publishers partner.
Academic publishers do bear some costs, such as for copyediting, but they have lucrative sources of income that more than offset their expenses. Although data are hard to find, it has been reported that academic publishing is one of the world’s most lucrative industries in terms of profitability, with profit margins approaching 40 percent.19 Compare that to the 2023 net profit margin for Netflix (10.6 percent) or Amazon (6.25 percent). Even Apple, the world’s second-largest company, had a lower margin (28.4 percent).Reference Buranyi20 Fossil fuel giants are far less profitable than academic publishers: In 2023, Shell Oil reported their highest profit margin in a decade (7.27 percent). Meanwhile Elsevier reported a 10 percent increase in profits in 2023 and a 37 percent profit margin.21
How have academic publishers managed to rake in such large profits? Incredibly, they not only get their contributions at no cost, but they also charge their contributors for the privilege of publishing. Then too, academic publishers charge subscription fees to the institutions – typically universities – financially supporting the research that supplies the articles those publishers then sell. Academic publishers also commonly require that contributors sign over copyrights to their work in exchange for publication, so that the publishers – not the authors – can charge fees for reproducing the authors’ work. As copyright holders, publishers also have sole rights to repackage research into, for instance, database products. Such ancillary products can produce substantial returns.
The system is truly a vision of absurdity. Academic publishers provide logistical assistance in publishing; they do none of the intellectual work. And they reap huge profits.
Let’s dig a little deeper into two sources of publisher revenue. One is fairly straightforward: subscription fees. Publishers charge university libraries big money to give their affiliates access to their copyrighted material. Between 2010 and 2019, UK universities spent over £1 billion on academic journals, and US institutions spent many times more.Reference Grove22 A small portion of these fees is paid to the scholarly societies who sponsor journals, and the rest goes to publishers’ revenues. It is hard to say how much any given institution pays, as each is forced to negotiate its own rates in secret with the publishers. Although prices are not disclosed, one investigation found that the same package of journals cost one university twice as much as another, a difference of nearly $1 million in 2014.Reference Bergstrom, Courant, McAfee and Williams23 Nontransparency gives publishers greater flexibility to charge the most they can. It may seem reasonable to charge universities for journal access, but remember: Universities fund the research that journals publish. (Of course, much university funding comes from public money, which means that taxpayers are ultimately paying for publisher profits.) A nominal fee to cover publication costs is warranted, but a fee sufficient to generate profits is another matter.
The same is true of article processing charges (APCs), the fees publishers charge researchers to publish research in journals whose prestige and utility derive from peer review and scholarly society expertise for which publishers pay nothing. The justification for APCs is that they are supposed to subsidize the cost of publishing, so that publishers can make materials more widely accessible. This is a laudable goal, and raising funds to realize that goal is fair. The question, as with library subscription fees, is why APCs should be so high as to furnish enormous profits for scientific publishers.
APCs usually range from $1,500 to $3,000, but can be as high as $11,000.Reference Else24 Leigh-Ann Butler and colleagues estimated that authors paid the six largest publishing companies $1.6 billion in APCs between 2015 and 2018.Reference Butler, Matthias, Simard, Mongeon and Haustein25 The impact of APCs is often devastating for researchers with less money to spare, such as young authors, and may completely close the door to publication for authors from the developing world.Reference Seguya, Salano and Okerosi26 Publishers are able to get away with high APCs because scholars have no choice but to publish peer-reviewed research: That stamp of approval gains a scholar recognition, job offers, and promotions, and without it, one cannot meaningfully participate in their chosen field.
APCs are charged to authors, though sometimes they are paid by authors’ employers or funders. Again, this typically means that taxpayers are paying for publisher profits. Then too, graduate students and junior faculty are often not supported when it comes to APCs. They are on the hook to pay fees themselves, and they must cover the charges or their careers could be stalled. When universities do pay APCs, they have less to spend on faculty salaries, library collections, and student services, among other important costs.
Björn Brembs and colleagues have described the dynamic that results from the perverse relationship between journal publishers and academic institutions.Reference Brembs, Huneman and Schönbrodt27 Publishers overcharge institutions for subscriptions or for APCs, depriving institutions of resources to support original investigative work. This makes life harder for faculty – the very people who do all of the meaningful editorial work involved in publishing, who have no incentive to invest in that labor other than a sense of responsibility to contribute to their professions: Scholarly societies want to be associated with reputable journals, so scholars do the scientific work. These scholars, then, have dwindling support and increasing demands on their time: Remember, the number of article submissions is rising. Institutions and their students suffer while faculty work overtime to provide free services to commercial publishers. This also eats into the time available to produce high-quality, replicable research, increasing the burden on reviewers, who are spread too thin as it is. That in turn further burdens journal editors, who must waste time finding more potential reviewers or else return to their shallow pool of willing ones, to the detriment of the research itself.
How Academic Publishing Became So Profitable
Why are scholars working themselves to the bone, and universities and vulnerable early-career scholars turning over big fees, to furnish the coffers of for-profit publishers that undermine the research enterprise? Answers lie in the history of scientific publishing.
Today’s business model originated with the British entrepreneur Robert Maxwell. After World War II, Maxwell, a decorated veteran, began working in scientific publishing. His early exposure to the discipline produced two important realizations. First, university and professional societies were doing a crummy job as publishers. It took too much time to review manuscripts and bring them to press, the resulting product looked cheap, and journal distribution was meager. Second, Maxwell realized that American universities were flush with cash thanks to the huge overhead payments they were receiving, and a good deal of this largesse was going to fund libraries. Libraries, then, could afford to buy a better journal, and even pay a premium for it. Maxwell recognized the brewing of a perfect storm. He joined forces with Paul Rosbaud, an experienced editor, and founded Pergamon Press.Reference Cahn28
Pergamon’s plan wasn’t limited to wider, timelier distribution of better-looking journals. The company also looked to expand the number of journals on the market, so they could sell more of them.Reference Clarke29 Maxwell and Rosbaud marketed their publishing services to leading scholars, encouraging them to start journals. By 1959, the company had 40 titles in its portfolio. As Pergamon gained clout and economies of scale, professional societies began signing up.
An industry grew up alongside Pergamon, and by 1970, commercial publishing had essentially conquered scientific publishing. With traditional university presses and scholarly societies less in the game, commercial publishers had free rein to boost prices and profits: Between 1975 and 1985, academic journal subscription costs doubled. By May 1991, Pergamon Press was so valuable, and its potential still so vast, that Elsevier bought the company for $817 million. The purchase included 418 journals and global publishing rights.Reference Miranda30 All this was before APCs emerged as a major profit center for publishers.
While they pushed new journals and took over old ones, Maxwell, Rosbaud, and other commercial publishers also pioneered methods for elevating the popularity of their properties – methods that, according to critics, distort the scientific process.Reference Young, Ioannidis and Al-Ubaydli31 Publishers recognized that new metrics, such as impact factors, could increase the visibility of their journals, which they could in turn parlay into sales. They pushed these metrics hard, helping to cement their importance in the field and jockeying for ownership of the highest-impact-factor journals. They also pressed for greater control of editorial operations at prestige outlets such as Cell, Nature, and Science, using their influence to focus these journals on high-citation-rate articles.Reference Henderson32
The financial success of Pergamon, Elsevier, and others has had another pernicious side effect: It has inspired and enabled the proliferation of scam journals. With so many scientific journals out there, it is easy to introduce into the market “predatory” journals, which use deceptive practices, such as falsely stating they use peer review, to entice unsuspecting researchers to submit manuscripts and, with them, APCs.Reference Amsen33 Then too, predatory journals contribute to the peer review crisis when they do seek reviewers, something they do indiscriminately because they are only seeking to generate the patina of legitimacy. On a recent and typical day, one of us (Schmaling) received 27 emails from predatory journals with invitations for peer review and submission – for a fee. With this volume of spam, it is possible that valid peer review invitations are overlooked, contributing to the lack of participation.
It must be said that scientists have not stood idly by while entrepreneurs and scammers have turned their research into profits. One opposition figure is Harold Varmus. In contrast to Maxwell, Varmus is a scientist. He earned his MD from Columbia Medical School, completed his residency, and moved on to NIH, where he did basic research. From there, he settled into the academic ranks at UC San Francisco and eventually gained one of the most recognizable names in science. In collaboration with Michael Bishop, Varmus was awarded the 1989 Nobel Prize in Physiology or Medicine for the discovery of the cellular origin of retroviral oncogenes. Three years later, President Bill Clinton appointed him director of NIH, a position that he held until December 1999. In 2010, he returned to NIH as director of the National Cancer Institute, a role in which he served until 2015.34
As a practicing scientist, Varmus understood the frustrations of publishing. He was regularly asked to review manuscripts for free, and saw how hard his colleagues worked on behalf of others’ profits. In particular, he worried that NIH was spending billions of taxpayer dollars to support the world’s best researchers, yet their published work was incredibly onerous for taxpayers to access. Commercial publishers held the copyrights and charged exorbitant fees to members of the public who sought to read the very research they had paid for.
Varmus’s solution was open access journals, the contents of which would be available to anyone who wanted to read them. As NIH director, he launched a campaign to create a freely accessible online library and publication platform, E-BioMed. The platform would host peer-reviewed journals and would also host research reports vetted by credentialed scientists using NIH criteria – an alternative publication pathway that did not involve traditional peer review. Response to Varmus’s proposal was swift and largely unfavorable.Reference Delamothe35 Scientists argued that NIH was trying to take over the publishing industry and to elevate the status of papers that had not been properly reviewed. Further, Varmus did not have a clear plan for sustaining the effort financially. Professional societies, which had become dependent on revenues from journal publications, also pushed back, arguing that they could be trusted to do the right thing on behalf of the scientific community.
By the end of 1999, with Varmus’s tenure as director winding down, NIH shifted to a less ambitious program known as PubMed Central. This would be a repository for scientific reports, which may or may not have been peer reviewed. The repository would be fully accessible to the public, although copyrights would be retained by journals. With colleagues Michael Eisen and Patrick Brown, Varmus also started the Public Library of Science (PLoS), which today is a nonprofit publisher and host of freely and fully accessible peer-reviewed journals.
Ideas promulgated by Varmus and fellow open-access supporters stimulated some important legislative proposals, including the Federal Research Public Access Act, introduced between 2006 and 2012, and its successor, the Fair Access to Science and Technology Research Act (FASTR), introduced three times between 2013 and 2017. FASTR would have mandated free public access to taxpayer-funded research, requiring authors to deposit articles in accessible databases within 6 or 12 months of publication, depending on which version of the bill one consulted. The bill would also have created a new kind of license to replace traditional copyright, allowing reproduction of scientific materials. So far, the arguments of opposition lobbyists seem to be winning the day, and legislative change remains elusive.
Fixing Peer Review, Restoring Academic Publishing
The relationship between academic researchers and for-profit publishers is not sustainable. Scholars increasingly refuse to provide peer review for free, and APCs jeopardize careers and discriminate against institutions and faculty with smaller budgets. Meanwhile, publishers use nontransparency – which is antithetical to research ethics – to boost their bottom lines, and have arguably corrupted investigators’ work by instituting systems that prioritize citation numbers over scientific merit. Other publishing models can do everything that the current model does – and more, including securing open access – without sacrificing money and high-quality science on the altar of profit.
How to sever this dysfunctional relationship?Reference Kaplan36 One approach we do not recommend is abandoning peer review. The problems with peer review are not inherent in the project of collective oversight by the scholarly community. Quite the contrary: We believe that high-quality peer review is the best tool available for protection against bias. The problems afflicting peer review stem largely from its role within the commercial publishing system and from biases against fellow researchers. We know, for instance, that women and minoritized researchers are underrepresented as reviewers, which may have something to do with apparent biases against these groups in publication,Reference Holman, Stuart-Fox and Hauser37, Reference Hopkins, Jawitz, McCarty, Goldman and Basu38 and in grant awards.Reference Schmaling and Gallo39, Reference Ginther, Kahn and Schaffer40 We also know that the publication process today does not encourage widespread participation in peer review. Instead, overworked editors return over and over to the small group of researchers willing to take part, in hopes of meeting publication deadlines while sorting through the glut of submissions.
A number of strategies for expanding and diversifying the reviewer pool have been proposed. Professional societies could identify potential reviewers and track their contributions.Reference Kaplan36 Submitting articles could be made contingent on completing a target number of reviews. In addition, professional networks could be activated to identify and mentor new reviewers.
All of this would likely be easier to accomplish if faculty knew that their uncompensated labor was being put to the best possible use by publishers committed to the widest possible distribution of useful knowledge at the lowest cost – not to inflating costs in order to realize profits. The results would be good for research quality. Consider: Scientists who know the sting of APCs feel little obligation to work on behalf of publisher bottom lines, thereby draining the pool of peer reviewers and fostering bias and narrowness. Excessive burdens on willing reviewers lead to burnout and further quality problems: Burned-out reviewers eventually stop reviewing, so the pool gets shallower still, leading to more burnout, even less reviewer capacity, and even more deficits in published research.
Unwinding the present architecture of scientific publication is possible, but it will require a concerted and genuinely idealistic effort. Such an effort is worth making, and a formula for success is hardly unimaginable. Harold Varmus and his generation of reformers had good ideas; the obstacles they faced were rooted in fear and conflicts of interest, not in any structural impossibility. Some of their ideas have already been implemented effectively – preprint servers provide access to non-peer-reviewed prepublication research reports, as E-BioMed would have.
A national or global digital library that would edit and post peer-reviewed scientific papers is also possible. This would require a substantial expansion of university libraries and the work of professional librarians. Multi-institutional consortia could shoulder a peer-review responsibility that is too large for any one institution. Oversight could be provided by a distinguished nonprofit entity, similar to the US National Academies. Professional societies representing the interests of each academic discipline would define the content of electronic publications and appoint editors and editorial boards.
Significant financial support would be needed to launch and maintain an enterprise with the capacity to publish at scale. To raise these funds, university libraries, professional societies, and research institutions could be billed based on their pro rata usage, with discounts offered to poorly resourced institutions. Institutional subscriptions could also be discounted in exchange for employee contributions to editorial service and peer review, with the goal of stimulating participation and reducing both the cost of searching for willing participants and the time wasted in securing reviews. Further, individuals from nonacademic institutions could purchase subscriptions or articles at cost. The costs of such a library and publishing process should be a fraction of those imposed by today’s commercial publishers, because the system would not build in a profit margin. And there is every reason to believe that the quality of publications would increase under a regime that scientists could actually feel good about participating in.
Conclusions
There might be justification for commercial scientific publishing if for-profit companies added value to the process. Profit would be the price paid for the benefits of commercial involvement. But for-profit publishers do not add value. They do not increase the rate of scientific progress, and they do not improve the quality of scientific work. Not only do publishers bear no responsibility for pursuing that aim, which is left to scientists themselves, but they also actively work against it by encouraging the development of unnecessary journals and predatory scams.
Yet our argument does not turn on whether commercial publishers could ever provide enough value to justify their profits. Even if they could, a nonprofit system could provide the same value at lower cost by forgoing profits. More broadly, knowledge is a public good, and it should be treated as such. This view is consistent with the ethical obligation of researchers to operate transparently and to participate eagerly in the group enterprise of sharing knowledge and evaluating it collectively, for the benefit of all. By contrast, a system that requires scientists to pay fees in order to share knowledge is antiscientific. That is the system we have now, but it need not be the one we have forever (Box 11.1).
Universities and scholarly societies should reconsider their relationships with commercial publishing houses.
Authors should question giving away the copyright for their published work to for-profit corporations.
Maintaining the peer review system is important for the future of high-quality scientific reporting, but the reviewer pool needs to be broadened, perhaps using incentives for participation.
Scholarship should be considered a public good, and should be accessible to as many readers as possible.