A few years ago the first author had a weekly meeting with a graduate student who was among his best students ever. She had arrived with a Master's degree in hand and, in short order, she wrote a review paper, drafted an empirical article that was about to be sent off for possible publication, and defended her dissertation prospectus. The agenda for our meeting was to consider predictions made by alternative models that she would evaluate for her dissertation.
Meetings with productive graduate students are the meetings faculty most look forward to, and this was no exception. Before talking about her proposed dissertation, however, the student stated that she had heard from a journal. It took a moment to remember that she had sent her review paper to a journal about four months earlier. This was the first article she had written for a psychology-related journal, and it would be her first sole-authored publication if accepted.
Unfortunately, the news was that her article had been rejected. She handed over the letter from the editor and the sole review. The editor thanked her for sending the manuscript to the journal, and wished her well in revising the article for possible publication elsewhere. The review began in a begrudging, complimentary fashion, noting that the writing was “pretty good” and that the conceptual views reviewed in the manuscript were “interesting.” In the course of his university career, the first author has read several thousand reviews, either as part of the review process for someone else's work or for his own. Although one can try to read between the lines too much, one could sense tension, as though the reviewer was angry about something.
What could the problem have been? The article wasn't overly long – something that does tax reviewers who have many demands on their time. The article wasn't critical in the sense of challenging or attacking someone's work or cherished position, so it didn't seem possible that she had offended someone. Although the point of the article was to bring material from other disciplines into a discipline that had not yet made these connections, the article seemed to be within the realm of what the journal purported to publish, so she couldn't have wasted the reviewer's time with a manuscript that was a poor fit to the journal and should never have been sent out for review. The writing really was good, in terms of clarity, so the reviewer should not have become frustrated trying to figure out what the student intended to communicate.
Then came the sentences that made clear what the problem was. The reviewer commented on a conceptual problem, namely, that the manuscript had not included a view the reviewer believed to be relevant to the discussion. The reviewer pointed to a book that the student had not cited. A quick look at the editorial board of the journal revealed that the author of the omitted book was on the editorial board and was the likely reviewer of the student's manuscript.
Although one can't know what the outcome might have been otherwise, the student certainly hurt her chances by not citing potentially relevant work done by the reviewer. What made this lesson even more painful was the first author's memory of sending off his first manuscript from graduate school. After a number of revisions, his advisor agreed that the manuscript was ready to be sent off and asked to see the guidelines to authors and the composition of the editorial board of the journal the manuscript would be sent to. The advisor reviewed the composition of the editorial board, looking for individuals whose work should have been cited but had not been. Had the first author done this same exercise with his graduate student, the outcome might have been more positive. To add insult to injury, the topic of the first author's dissertation (Wagner, 1987), and of other work (Sternberg et al., 2000; Sternberg & Wagner, 1986; Wagner & Sternberg, 1985, 1987), was the importance of tacit knowledge (i.e., practical know-how that usually is not taught formally) to professional achievement in domains including academic psychology. The dissertation established that tacit knowledge about the ins and outs of publishing was related to outcome criteria, including number of publications and citation count. Had he learned nothing from his dissertation and from the wisdom of his helpful advisor?
We don't purport to have any special wisdom about publishing, which won't be much of a surprise, given the experience we just related about the graduate student's first psychology manuscript. It also is the case that excellent guidance on scientific writing exists, including the present book and others (e.g., Sternberg & Sternberg, 2016). We do think, however, that some practical information about publishing – such as avoiding slighting an individual by failing to cite his or her work when you should have – rarely gets discussed. Perhaps the best example of such an omission is the topic of rewriting. Virtually no manuscript is accepted in original form. But in our experience, skillful rewriting – revising one's manuscript in response to feedback from editors, reviewers, colleagues, or even your own initial reading – is not fully appreciated.
Having had her first and certainly not her last taste of disappointment, it was time for the graduate student to roll up her sleeves and get on with the business of rewriting for publication. First on the agenda was to feature the missing conceptual view pointed out by the reviewer and to look more exhaustively for others. A researcher invited to review her manuscript by the editor of the journal that had just rejected it could well be invited to review it again by the editor of the next journal she tried.
Being Miserly with Words but not with Citations
Most drafts of manuscripts need a good bit of pruning. When writing for publication, economy counts. Every word must serve a purpose. Page limitations preclude irrelevant material. Although it is vital to be economical in choice of words and content, one nevertheless should be generous in attempting to relate one's own ideas to those of others.
When the first author's advisor scanned the editorial board to see whether relevant citations of potential reviewers had been omitted, the first author thought it a rather cynical enterprise. After all, if someone's work had not surfaced when writing the introduction or discussion, why stick it in now – besides the self-serving answer of wanting to improve the chances of having the manuscript accepted for publication? This reaction reflected two assumptions that are naïve if not erroneous.
The first assumption was that the identity of all related work would be known in the routine course of carrying out the research. Given the sheer volume of what is published annually, it is likely that an author will be unaware of relevant work of at least someone. Fortunately, having the bad luck of having one's work reviewed by that someone is a bit rarer. In the case of the graduate student, her area was an active one, with a large number of books and relevant articles. In her defense, the book she missed was not particularly recent or notable. But it was an omission. Most writers tend to overvalue the contribution of their own ideas and writing, and undervalue those of others. This bias, combined with the practical difficulty of keeping abreast of exploding literatures, makes it probable that one will omit potentially relevant work. As a beginning researcher and author, it is easy to get overwhelmed by the amount of potentially relevant literature. It can be difficult to determine how much searching for relevant literature is enough. Scanning the editorial boards of all of the journals in an area, as well as study section rosters for grant funding agencies, is one way to identify possible omissions of important sources because these individuals tend to be among the major contributors to their areas of expertise.
The second erroneous assumption was that reviewers would approach, if not actually achieve, complete objectivity in their appraisal of one's manuscript. By its very nature, writing a review represents a combination of subjective and objective elements. When a reviewer reads a manuscript that is perceived to be closely related to a contribution of someone else, and that contribution is not cited, it is hard for the reviewer not to infer that the author is a little arrogant, ignorant, or careless. What we know about human judgment and decision-making suggests it is likely that evaluation of a manuscript is colored by such inferences.
In retrospect, our changing view about the value of generously citing the work of others was predicted by results from the first author's dissertation (Wagner, 1987). Academic psychologists, graduate students, and undergraduate students were asked to rate the quality of various research-related strategies on two scales. An ideal scale represented the value of the strategy ideally, without regard to practical reality. An actual scale represented the practical value of the various strategies, given the world of academic psychology as it exists. The interesting finding was that discrepancies between ideal and actual ratings decreased with experience, being largest for undergraduates, next largest for graduate students, and virtually nil for experienced academic psychologists. Whether the convergence of actual and ideal noted for experienced academic psychologists represented (a) slippage in their ideals over time, (b) a selection factor, in that individuals whose ideals were closer to realities were more likely to enter and stay in the profession, or (c) both, wasn't clear and still isn't. We now believe that generously citing the work of others is an important correction for a natural egocentrism that is common among writers, and fortuitously reduces the probability of offending reviewers who feel slighted by our omitting reference to their work.
Soliciting Feedback Actively
Part of the process of writing for publication is receiving feedback from reviewers and editors. But the amount of feedback obtained can be variable, and, in general, is paltry relative to the feedback available to those who seek it actively. Seeking feedback actively and incorporating it can improve your research and writing.
With the exception of the occasional sadistic reviewer, most of us find it more pleasant and easier to give positive feedback than negative. Consequently, negative feedback may be left unstated; however, with some diligence and tact, it often is possible to obtain negative feedback. Most colleagues and mentors are quite willing to be a source of additional feedback. Individuals are flattered that you value their judgment enough to ask them to provide comments on a manuscript. By explicitly asking for suggestions about how to make the manuscript better, you provide individuals with a comfortable way to convey negatives in a constructive way.
Another means of actively soliciting feedback is to seek out opportunities to give talks about the work and to seek out researchers in the audience who do related work for discussion after the talk. The obvious consequence of actively seeking feedback is an increase in the amount of feedback you get. Because not all feedback is valid, an increase in the amount of feedback you obtain means an increase in valid as well as invalid feedback.
Separating the Wheat from the Chaff
Feedback needs to be evaluated rather than accepted at face value. Reviews typically contain some good advice and some not-so-good advice. Given the amount of time you have spent on your project, and the comparably minimal time a reviewer spends, it is not surprising that some feedback will be unwise or simply wrong.
Beyond this, it is important to determine, as best one can, whether the editor and/or reviewers want a successful outcome for you, or whether they are convinced that your manuscript is a poor fit to the journal and their views are not likely to change regardless of revision. Ideally, but not always, the editor's intentions will be clearly spelled out in the action letter that accompanies the reviews. If the overall impression of the editor or reviewer is positive, he or she may try to provide recommendations and comments for revision in a genuine attempt to improve the final product.
On the other hand, the editor or reviewer may believe that the topic of the manuscript simply is not interesting enough to merit publication, given the existing literature. Many editors and reviewers are not comfortable condemning a manuscript outright for something that cannot be fixed, and, rather than doing so, they provide a laundry list of more specific comments and recommendations. They attempt to justify the global decision with specific comments and recommendations that really are chaff, as in the strips of metal foil released into the atmosphere by planes to confuse radar, as opposed to feedback that, if implemented, would result in acceptance of the manuscript. Though this roundabout method of being rejected may seem discouraging, it is important to take the comments for what they are worth and use every opportunity to better your manuscript for the next time you attempt to publish it. As mentioned before, reviewers may serve on multiple journal boards and could see your paper more than once.
Anticipate Unintended Inferences
What does it mean to know a word? Does knowing a word mean that you can define it? Does it mean that you can use it in a sentence? Does it mean that you have relational knowledge about the word such as knowing antonyms and synonyms? Does it mean that you can choose the correct morphologically related variant of the word (e.g., run, ran, running) for a given context?
Given that humans use words to think as well as communicate, a study that adds to our knowledge about what it means to know a word would seem to be of profound importance and worthy of publication in one of the most prestigious and widely read journals. The first author and another graduate student thought they had such a study. There are existing measures of definitional knowledge, usage, relational knowledge, and morphological knowledge. Performance on such measures is correlated but not completely so, suggesting that these represent related yet distinct aspects of vocabulary knowledge. But we recognized a limitation in these measures that apparently had been overlooked. The measures had different vocabulary words on them. Differential knowledge of the specific vocabulary words that appeared on the different tasks could reduce the correlations among the tasks artificially. We had what we thought was the brilliant insight to do a study in which participants were asked questions that had them demonstrate their definitional knowledge, usage, relational knowledge, and morphological knowledge for the same vocabulary words, thus eliminating any effect of differential knowledge of specific vocabulary words on the correlations among tasks. We used a state-of-the-art model to show that each of the purportedly different kinds of tasks actually measured the identical underlying source of vocabulary knowledge. For the task of measuring vocabulary knowledge, the tasks were interchangeable because they all were measuring the same thing.
We sent our manuscript off to a very prestigious and widely read science journal and our manuscript was rejected. It was reviewed by researchers who studied morphological knowledge and who appear to have drawn an inference from our manuscript that we had not intended: that they were studying something that really did not exist. This happened on several occasions. We kept rewriting and submitting until the manuscript finally was accepted in a good journal, but one less prestigious and more specialized than we believe was warranted given the contribution (Spencer et al., 2015). To get our manuscript published, we reconceptualized our findings in a way that made them more palatable to researchers who made a living studying one of these kinds of vocabulary knowledge and that also helped eliminate unintended inferences. We suggested that the different kinds of vocabulary knowledge represented important facets of what it means to know a word, and that when one walks around the world and learns vocabulary words from context, one learns something about each of the different facets. Had we thought more carefully about unintended inferences, especially ones some reviewers would find to be highly threatening to their work, we might have been able to publish our manuscript in one of the earlier outlets we had tried.
Embrace Your Limitations
According to some researchers, there is a phenomenon called specific reading comprehension disability. It refers to people who struggle to comprehend what they read despite being able to read the words on the page. We were skeptical that this phenomenon existed, believing that individuals with so-called specific reading comprehension disability really had a general deficit in language that was not specific to reading comprehension. We had access to an ideal dataset to test whether the phenomenon was real, a dataset made up of hundreds of thousands of students who happened to attend schools that served students living in poverty. We first counted up the number of students who were poor at reading comprehension. Then we kept only students who were both poor at reading comprehension and adequate at reading the words on the page, which was about half the number we started with. Finally, we kept only students who were poor at reading comprehension, adequate at reading the words on the page, and also adequate in oral language as indicated by their performance on a vocabulary measure. Less than 0.1 percent of the original sample of students who were poor at reading comprehension remained.
We thought this was strong evidence that our skepticism was warranted. The reading comprehension problem was not specific to reading at all but rather a manifestation of a general language deficit. We presented a talk on this work at a conference, and the audience included researchers who championed the phenomenon of specific reading comprehension disability. In looking around the room while giving the talk, we noticed icy stares and frowns from these researchers. During the discussion after the talk, none of them raised their hands to comment until the very end. Finally, one of the researchers made the comment that an important limitation of our study may have been our sample. Children living in poverty were overrepresented in our sample, and general language problems are more common in children living in poverty. Maybe that is why we failed to find students whose comprehension problems were relatively specific to reading. We had to acknowledge that as a potentially important limitation of our study, and suggested that we needed to do additional studies on other samples to address this limitation.
We wrote up the study, and featured this limitation prominently in a limitations section at the end of the manuscript (Spencer, Quinn, & Wagner, 2014). In a subsequent series of studies, we discovered that we were wrong about our earlier skepticism about the existence of individuals with specific reading comprehension disability (Spencer & Wagner, 2016, 2018). Although individuals who are poor at reading comprehension despite being adequate at decoding do tend to have oral language problems, their average deficit in reading comprehension is three times greater than their average deficit in oral language. Consequently, the oral language problem is not a sufficient explanation of the reading comprehension problem, and something else must be contributing to the poor performance in reading comprehension. We missed this fact in our original study because we were just counting individuals instead of taking into account how severe their problems were.
In publishing our original manuscript, we were fortunate to have had a limitation pointed out to us before we wrote it up, so we were able to acknowledge it. Often, additional limitations one hadn't thought of will come up as part of the review process and should be acknowledged. Although one might be tempted to hide limitations one is aware of rather than acknowledging them in a manuscript before sending it off for review, it is much better to acknowledge them than to have them discovered by reviewers. When you acknowledge even important limitations, you convey the message that your contribution is still important or you would not have submitted it for possible publication. When a reviewer discovers a limitation you haven't acknowledged, it can call into question the importance of your manuscript.
Perseverance
Given that publishing is one of the two principal ways that psychologists influence their respective fields (the other is influencing students who subsequently publish), why do so many reduce their rate of publishing – if not stop entirely – when tenure or promotion are no longer at stake? The answer in part may be the low rate of return for their effort, as well as the unpleasantness of the process of peer review. Most manuscripts are rejected for publication by the selective, valued journals in a given field. Rejection rates for APA journals, for example, typically are 80 percent or higher. Of the smaller proportion of manuscripts that are eventually published, few will have an impact, as measured by citation count – the frequency with which the article is cited by someone else who publishes an article, chapter, or book. In fact, the citation counts of the majority of scientists in any field, including psychology, are near zero. This means that in a typical year, the typical researcher's publications are not cited by anyone.
Nevertheless, a small proportion of individuals will have a profound effect on a field. The distribution of citation counts is highly skewed. Although the majority of scientists have citation rates near zero, a minority have citation rates in excess of 100 citations. In most fields that have been studied, the top 10 percent of contributors account for roughly half of everything produced, and the top 20 percent account for roughly 80 percent of everything produced (Simonton, 1996).
Dennis (1955), for example, studied the distribution of productivity in seven fields: eighteenth-century American secular music; books in the Library of Congress as of 1942; publications in the fields of gerontology and geriatrics; publications in the field of North American geology, 1929–1939; publications on infantile paralysis to 1944; entries in Chemical Abstracts, 1937–1947; and publications on linguistics, 1939–1947. The combined results were that the top 10 percent of producers contributed 50 percent of the work. For publications in the field of psychology, the top 10 percent of producers contributed 40 percent of the work, while the bottom 50 percent contributed only 15 percent (Dennis, 1954).
What is it that differentiates the top producers from the rest of the pack? One possibility is that they simply have more success publishing their work. In other words, their “hit rate” is higher because the quality of what they produce is higher. Early studies that examined ratios of contribution quality (i.e., contributions that had a measurable impact) to quantity (total number of contributions) suggested that this might indeed be the case (Dennis, 1966). However, these analyses were flawed in comparing quality of contributions from one database to total number of contributions in another (Simonton, 1988). When more recent studies compared quality and quantity of contributions for the same producers, a different pattern of results emerged (Simonton, 1985). This pattern of results is well described by the equal-odds rule, which holds that there is a relatively constant proportion of quality to quantity of contributions across individual researchers, and within individual researchers across different time periods in their careers. What differentiates top producers from the rest of the pack is not a higher hit rate, but sheer productivity. They surpass others not only in hits or quality contributions, but also in total contributions, which include misses. Perseverance counts. When one experiences disappointing rejection, the adaptive response is to continue working and publishing as hard as ever. Something eventually is likely to be a big hit and will propel your career forward.
Reacting Adaptively
In principle, negative feedback is directed at our manuscripts rather than at us. Most guidelines that editors send to reviewers make clear that ad hominem attacks are inappropriate, and conscientious editors interdict any such comments rather than send them on to authors. Nevertheless, it is hard not to take negative comments and decisions about one's manuscripts personally. The temptation is to counterattack, especially if there were inaccuracies in reviewers’ comments, as is likely to be the case.
The first thing to do is find a way to handle your anger and disappointment that does not jeopardize what you are trying to accomplish. A mistake we see colleagues make, one we believe arises from anger, is engaging in pointless debates with editors that, at best, win intellectual points but won't affect the outcome for the manuscript. The editors to engage are those whose letters encourage revising and resubmitting. When given the opportunity to revise and resubmit, the effective strategy is to deal with every point in the letter from the editor and each reviewer. Dealing with every point does not just mean rolling over – there may be a good reason not to implement a recommended revision. Most reviewers and editors are quite willing to accept, and probably even expect, noncompliance when it is justified by a coherent rationale. Some unsound recommendations may arise from a misreading of the manuscript by a reviewer. In such cases, the error can be pointed out in a nice way, and it is a good policy to change the relevant text to make it less likely that a reader will make the same mistake.
Getting a Head Start
When a manuscript or grant proposal is submitted, a candid self-assessment can be valuable in predicting liabilities that may prove problematic in the review process as well as in carrying out future research. The first author has a vivid memory of his advisor calling a meeting of his research group right after sending off a manuscript for a book. The book provided a theoretical view that encompassed work his advisor had carried out for the previous decade. The purpose of the meeting was to identify and discuss limitations in the theory described in the manuscript just sent off for publication. His advisor was disparaging a book that had not yet made it to the bookshelves, and encouraging others to join in.
Here the goal was to chart the course of future research, rather than to anticipate the need for revision because the book already had been accepted for publication. But for manuscripts and grant proposals, the futures of which are in doubt, there may well be an obvious next study to do that can provide a data-based response to a likely future critique. The most powerful weapon to have in debates with reviewers and colleagues is supportive data.
In summary, rewriting is an overlooked and underappreciated subject. Skill at it can make the difference in whether a manuscript ultimately is accepted for publication, and it can make a difference in your impact on the field over your career.