The best psychological journals share relatively few attributes. Some contain only empirical articles; others contain no empirical articles. Some publish only short articles; others publish only long articles. Some encourage free submissions; others are by invitation only. One thing these journals have in common, however, is that the articles submitted to them are refereed: The articles go through a process of peer review before a decision is made as to their fate.
“Peer review” means that scholarly colleagues read your article and comment on it. They rarely make the final decision on whether the article should be published. Rather, that decision is made by an editor. But the comments of the peer reviewers play an important, often critical, role in assisting the editor in deciding on the fate of a manuscript.
You might ask, right at the start, why you should write for your referees. Isn't that “selling out”? It is selling out if you say things you don't mean or fail to say things you do mean. It is not selling out, however, if writing for your reviewers means that you take into account various points of view, you criticize others only if you can support your criticisms, and you recognize that referees will not believe something just because you said it. In other words, writing for referees is positive if you use this mindset as an opportunity to produce a better article.
As an editor, I have encountered authors who simply refuse to accept criticism. I had one author who had eight reviews of his article, seven of them extremely negative and one only lukewarmly positive. Instead of concluding that the article was in need of serious improvement, the author concluded that the referees (and the editor) were either fools or malignly intentioned. On the one hand, it is indeed possible that one will have unique insights that others fail to appreciate (Sternberg & Davidson, 1982, 1983). Highly creative work is often underappreciated (Sternberg, 2018; Sternberg & Lubart, 1995). On the other hand, even if your work is highly creative, if you are getting seriously negative reviews, then you need to find a way to re-express your ideas so that they are more likely to gain widespread acceptance. Part of the creative process is taking a creative-leadership role – convincing others, including referees, of the value of your work (Sternberg, 2003).
There are four possible ways in which peer review might be done. In practice, three of the four are fairly widely used. In double-blinded review, neither the referees nor the authors know who the others are. That is, the referees do not know who wrote the article they are reviewing, and the authors of articles do not know who the referees are. In single-blinded review, typically, the referees know who the authors are, but the authors do not know who the referees are. In theory, it would be possible for a single-blinded review to be done whereby the authors know who the referees are, but not vice-versa; however, I have never seen this form of single-blinded review used. Finally, in open (unblinded) review, both the authors and the referees know who each other are.
There are various arguments in favor of and opposed to each of the three commonly used forms of review. One could argue that referees should know who authors are, because that knowledge provides context for their evaluating the articles. One equally could argue that referees should not know who authors are, because such knowledge might bias the referees. Similarly, one could argue that authors should know who the referees are, so that referees cannot take advantage of anonymity to write reviews that trash the authors. One equally could argue that if referees believe their identities will be revealed to authors, they will be afraid to speak the truth. For example, a junior scholar might be reluctant to criticize a senior scholar who later might be asked to write a tenure or promotion letter for the junior scholar.
Whatever form of review is used, one way to improve your chances of getting your articles accepted is to write with your future referees in mind. In writing and then reading over your article, think the way your referees will think. Writing with referees in mind substantially, even drastically, improves chances of a favorable outcome for your article. So many of the rejections authors receive easily could have been averted had those authors written with their future referees in mind. This chapter contains a dozen tips on how to improve your chances of acceptance by writing with the referees in mind.
1 Proofread.
The simplest thing you can do is to proofread your article before you submit it. Referees rarely reject articles solely because they are chock full of typographical, grammatical, or word-processing errors. But such errors severely irritate most referees and are likely to put them into a rejecting frame of mind. Thus, the referees are likely to magnify the importance of other flaws and then to recommend that your article get the axe. Probably they are hoping you get it too. Some referees refuse to review articles that they view as not carefully proofread.
This punitive frame of mind is easily understandable. Referees are donating time as a professional service. Almost all of them would rather be doing something else. They donate the time because they want their articles refereed, and the only way the system will work is if they give in exchange for what they hope to get. But referees do not want to be proofreaders. If they did, they would have chosen proofreading as a career, or they would have become high school English teachers.
Their feeling in reading an article is likely to be that if the author did not want to bother to proofread his or her article, then they should not be obliged to review it. In my experience, lack of proofreading is the easiest thing to fix but one of the things that least often actually is fixed.
There are several reasons why many authors do not proofread their articles. None of the reasons is good.
First, authors may think their word processor did what they, the authors, should have done. The trouble is that the spell checker is of no help for spelling errors that constitute real words, as when principal is spelled as principle, or for when the word that is used is correctly spelled but simply inappropriate. Grammar checkers don not always function correctly either. For example, the previous sentence passed both my spell checker and my grammar checker with flying colors, despite its containing a blatant error.
Second, reading their own articles may make authors anxious. Sometimes they are appalled by what they read. But imagine how the referees must feel. They did not even write the thing, so it is worse for them.
Third, authors may believe that they are above proofreading - that this is a job for paid professionals. Not so. It is your job to proofread your work. Referees and editors want to concentrate on the substance of what you say, not on correcting your errors as a grammar school teacher might.
Fourth, authors may think that they do not have the time to proofread. But is it fair to expect referees and editors to do what authors do not want to bother themselves to do? Of course not. So proofread your article before submitting it! Look for spelling errors, grammar and capitalization errors, errors in usage, and for missing or out-of-order pages or tables or figures. Also check that the references match the citations in the text of the article. Proofreading also gives you one more chance to check the facts and the accuracy of what you wrote.
2 Ask one or more trusted colleagues to read the article the way a referee would.
Did you ever find an obvious typographical error in a published article and wonder how the author could have missed it? The answer is that the author missed it the same way you would have. After reading the darn thing several times, the author kept seeing the error without encoding it as an error.
In general, we experience a certain habituation to our own work, the same way others do. It is much like a dirty or messy house. What looks dirty or messy to an outsider has come to look clean and neat to the person who lives in the dump. It often takes an outsider to spot things that will be obvious to everyone except the person who wrote the article.
3 State clearly the problem you are addressing and then organize the article around the problem.
Sometimes it is hard to figure out exactly what problem the author of an article thought he or she was trying to solve. This phenomenon can occur because the author does not know what problem the article is supposed to solve, or because the nature of this problem was not clearly communicated. It is the author's responsibility to make clear early in the article what problem or problems the article tackles.
Once you have stated the problem, organize the article around it. Show why the problem is important - why it should matter to anyone besides you. If you do not know, why should a referee? In the literature review, use relevance to the problem tackled as the major basis for deciding what to cite. In the methods section, make clear how the methods address the problem. In the results section, tell the story of how the successive analyses help solve the problem that was originally posed. And then, in the discussion section, summarize what you did and discuss its implications.
A clearly focused, tightly organized article has a great advantage in the review process. You are helping the referee understand what your goals are and how you are trying to reach them. If you leave it to the referees to figure these things out, there is a good chance that the conclusions they come to will differ from your own.
4 Cite likely referees (who conceivably merit citation).
Suppose I consider myself one of the world's greatest experts on the effects of high-fat, sugary foods on amorous behavior. I view myself as one of the few people who really know what happens subsequently when romantically involved couples go out on a date and share a large piece of cheesecake. I get an article to review on the topic and look forward to reading it. First, of course, I check the references to see which of my superlative articles on this topic have been cited. I discover that none of them is cited. I cannot believe it. How could anyone write about this topic without citing my work? I now start reading the article, but I already know it is a pretty poor piece with awful scholarship. All I need to do is find some reason to reject it, and I most likely will.
It is impossible to anticipate everyone who might referee an article. Nor can one cite every potential reviewer. But it is important to cite likely referees who have made a serious contribution to work in the field that the article covers. And if the editor has sent the article to a particular reviewer, the editor, at least, considers the individual to be one of the more active contributors to the field the article covers. Thus, this suggestion is not a cynical one: The likely referees are the same people who are likely to be the major contributors to the field.
It may sound like a silly game when one tries to cite potential referees. It's not. Rather, it is called “scholarship.” Scholarship means that you cite scholars whose work is relevant to yours, and perhaps on whose work your work draws. Editors are most likely to choose as referees precisely either those scholars whose work you cited or those whose work the editors believe you should have cited.
5 Make clear up front what the new and valuable contribution of your article is, and make sure you are right.
My conversations with journal editors suggest that the #1 reason for rejection of journal articles is lack of substance - there just is not enough new in the article to justify its publication in their journal. Reviewers, too, are on the lookout for articles that have little or nothing new to say.
It therefore behooves you to ask yourself what the new and valuable contribution of your article is, and to make clear near the beginning of the article what it is. Do not expect reviewers to figure it out for themselves. If you cannot figure it out, you cannot expect them to. If you cannot find such a contribution, either do more research or do more thinking before you submit the article.
6 Make clear how your work builds on that of others.
No one likes a credit hog - someone who makes a contribution and then acts as though no one else has ever had any idea of value in the area of work. Sharing credit goes beyond citing potential reviewers. It involves showing how your work builds on their work and the work of many others. Scholarship always requires drawing connections between what is new and what is old.
It also is important in citing references that you are up-to-date. Referees generally are not happy to see reference lists that would have been up-to-date a decade earlier. So check recent literature in the area in which you are working. In this way, you also are less likely to repeat what someone already has done.
Some authors may feel that, in setting out in a bold new direction, they really owe almost nothing to anybody else. But it is important to realize that, even when you oppose old ideas, you still are using those ideas as a base from which to map your campaign of opposition. And even when you move away from what others have done, had they not done what they had done, you would not have had their work to move away from. Thus it is important to show how you build on, not just how you go beyond, past work.
7 Check your data analyses and interpretations.
If your article makes a substantial contribution, there is a good chance that someone will ask for your data, which you are obliged to provide to him or her. This “someone” may be a referee, or someone who later reads the article. One of the more embarrassing events in the life of an academic is to have one's data analyses demolished. It is therefore important to check that you have used the correct forms of analysis and to ensure that you have transcribed the statistics correctly.
Also make sure that your interpretations are correct. For example, more stringent levels of significance do not indicate stronger effects, but rather, lesser likelihoods that a given result would have been obtained under the null hypothesis.
8 Make sure that your conclusions follow from your data.
High up on the list of annoyances to referees is the author whose claims go well beyond his or her data. Such authors are all too common. They may have a modest finding and then write about that finding as though they have changed the face of the Earth. If your conclusions go beyond your data, chances are that referees will notice this fact and lash out at the conclusions and at you.
There is a place in most articles for speculation that goes beyond the boundaries of the data. This place is in the discussion section of the article. But when you go beyond the boundaries of the data, make clear that you are speculating. Do not assume that referees or other readers will know that you know you are in a speculative state of mind.
9 Explain things clearly and in sufficient but not excessive detail.
It is annoying to a referee to be unable to judge some aspect of the work because that aspect is not clearly explained. The referee should not have to spend a lot of time trying to figure out what you really wanted to say. Make sure you say it and say it clearly. When you describe past work, describe it in sufficient detail so that someone unfamiliar with this work can nevertheless get the idea without having to go back to that work. When you explain what you did, explain it in sufficient detail so that someone else could replicate what you did precisely.
Novice article-writers often fall into one of two traps: too much detail or too little.
The dissertation trap is to go into such great detail that readers can barely stay awake trying to sort the wheat from the chaff. Keep the mind-numbing details to dissertations rather than journal articles. If you need to go into considerable detail, write hierarchically so that the main ideas shine through. Do not force referees or other readers to get hopelessly mired in a bog of turgid writing from which they cannot escape.
The other trap is to assume that every reader has the detailed knowledge of a specialized area that the writer of the article has. Relatively few people will have your knowledge about your own area of specialization. Moreover, most readers, when confused, will not try to puzzle out what you really meant to say. Why should they bother? Chances are good that they will give up on your article. Many referees will simply say that it is impossible adequately to judge the article. Even if they keep reading, they are likely to regard you as muddle-headed or as the poor writer you have shown yourself to be.
10 Anticipate likely objections to your interpretations of the data.
Confirmation bias seems to be a matter of human nature. But just as you read your own data through your preferred lenses, so do your referees. And they may be wearing different lenses from yours. It is important to anticipate likely objections to your interpretations and to defuse these objections before your intellectual opponents ever set their eyes on your article.
Having an intellectual opponent read the article before you submit it is one way to anticipate objections. Another way is to put yourself in the place of the opposition, and ask yourself how your opponents might interpret the data in a way that would render the data most favorable to their own oppositional point of view. Then show, in your discussion section, why their interpretation cannot be correct. Or if it can be correct, at least acknowledge this fact. If the data are inconclusive, however, your article may be rejected for this reason. Thus, if you realize that your data are inconclusive, you might wish to wait to submit the article until you have done further research.
Editors often deliberately send articles for review to referees who are likely to be predisposed not to accept the conclusions of the article. Good editors take into account not only what reviewers say, but also who said it. But not all editors are good, and some treat the words of reviewers as divine revelations, no matter how idiotic what they say may be. Other bad editors seem never to read any of the articles submitted to them, so they have no choice but to accept what the reviewers say.
You have to hope that the editors with whom you deal do not simply take at face value everything the referees say. Fortunately, there are many competent editors. Unfortunately, the number of incompetent ones is not vanishingly small.
It occasionally happens that there is someone out there in the wilds who, you believe, has a personal animosity against you that goes beyond intellectual disagreement. When you submit an article, you can request that a particular individual not referee your article. If you do so, you should say why, and realize that it is up to the editor to decide whether he or she will honor your request. You also can suggest referees, although the editor is under no obligation whatsoever to accept any of your suggestions. Keep in mind that editors sometimes purposely seek out oppositional referees, so that in providing names of scholars opposed to your work, you may be providing editors with oppositional referees whom they then will seek out!
11 Make clear what the limitations of your work are.
Any good discussion section includes at least some frank acknowledgment of the limitations of the study. Did you use just a single methodology, or type of stimulus material? Did you use a restricted range of types of participants? Did you look at behavior only in one kind of situation, perhaps an artificially contrived one? Referees and all readers appreciate honesty. Most important, referees are less likely to mention limitations in reviews that you already have mentioned, unless the referees see them as fatal flaws.
Many authors nurse the hope that the referees will not notice the flaws. Such a hope is likely to be wishful thinking. Moreover, it is a misguided wish. Worse even than getting a paper rejected before publication is having to retract it after publication or having to resist an onslaught of published criticism for flaws that you should have noticed. Save yourself the trouble by acknowledging the flaws yourself.
12 Write for your likely referees and readers.
Expert article writers do not just write articles. They write for an audience. They decide on likely journals before they put pen to paper (or fingers to computer keys).
You can get a good idea of the types of articles a given journal publishes simply by reading the journal's mission statement (usually near the front or back of the journal) and by looking at recent past issues. But there is a more informal kind of knowledge you need to acquire either through your own experience or by profiting from the experience of others.
Many characteristics of journals go beyond mission statements. Some journals seem to emphasize methodological rigor above all else. One reads them and has the feeling that the study could be infinitely trivial but nevertheless published as long as it was methodologically sound. Other journals seem to emphasize articles that are interesting but flakey: The ideas are provocative but the evidence for them is slim. Still other journals seem more concerned about length than about anything else. These journals will not publish relatively longer articles, no matter how good those articles may be. One journal to which I have submitted seems to care more that the article is in standard APA journal-article format than about what is said in this or any other format. These kinds of characteristics tend to come and go as the editorships of journals change, but oftentimes the “culture” of a given journal endures beyond any single editorial board.
It thus behooves you to find out as much as you can about the kinds of issues that are important to the editor and referees of a given journal. You can save yourself a lot of lost time by seeking journals that publish the kind of article you have written and by avoiding journals that do not publish this kind of article.
One last point bears repeating: In writing for readers and referees, you should never sell yourself out. Do not say things merely to please readers, referees, or editors. Rather, write for others so as to improve your writing and your communication in general. And if you end up with negative reviews, and if you believe that you cannot make changes suggested by referees, don't make them. Explain to the editor why you chose not to make the suggested changes. If the editor does not agree with your rationale, you always can submit elsewhere.
The goal of writing for referees is not to compromise yourself, but rather, to write the best possible articles. I have been a psychological scientist for 43 years: I have never written anything I did not believe, nor failed to say what I thought for fear of offending readers or referees. At the same time, I always have written for my referees. I have viewed doing so as an opportunity to make my articles the best they can be.
Now that you have learned something about how to write your article with your referees in mind, do it. As for me, it is time for me to practice what I preach: I have to proofread my article!
This chapter addresses how to handle the reviews and action letter on your submitted paper. Our advice will doubtless overlap with that in other chapters in this volume. If you have followed advice in previous chapters on preparing your paper and making a compelling argument, your chances of publication are certainly increased. However, a large part of the process of academic publication involves rejection. Even the most esteemed researchers face rejection on a regular basis; indeed, they will often report that their most important papers were initially rejected, and sometimes they had to try two or three journals before gaining acceptance. In fact, the more exciting your paper is, and the more strongly its claims upset the conventional wisdom, the more likely you are to be given a rough time in the reviewing process. After all, the reviewers believe the conventional wisdom; your new paper must convince them that they are wrong. A paper that is a landmark in retrospect might have originally suffered rejection for this very reason of going against the grain of the field (Sternberg, 1998, 2003).
Some writers decry the conservatism of the “scientific establishment,” but actually this skeptical attitude is what makes the scientific process great (e.g., Gawande, 2016; Lilienfeld & Ammirati, 2012; Stemwedel, 2011). As Richard Feynman famously remarked, “Science is a way of trying not to fool yourself. The first principle is that you must not fool yourself and you are the easiest person to fool” (Hutchings, 1985, p. 343). And, to add Carl Sagan's (1996, p. 304) thought,
At the heart of science is an essential balance between two seemingly contradictory attitudes – an openness to new ideas, no matter how bizarre or counterintuitive they may be, and the most ruthless skeptical scrutiny of all ideas, old and new. This is how deep truths are winnowed from deep nonsense.
Reviewers and editors provide the skeptical check on new claims, and they should.
To convince other scientists that the conventional wisdom is wrong, the case must be convincing and the evidence overpowering. The hope is that conservatism in the publication process will keep false positives from arising in science too often. However, recent events across science show that the pendulum toward “acceptance” may have swung too far, and more rigorous (hence conservative) scientific procedures are now being encouraged and are often required for publication. And thus rejection may be a more likely fate of many papers, at least at the best journals where you want to publish your work.
Categories of Rejection
But back to you and your paper. You (and probably your colleagues) had an idea, conducted research, and now are attempting to report your findings to the world. You have chosen a journal and submitted your manuscript. The research took months, maybe years, to complete, and you and your collaborators worked hard on the paper, trying to make it excellent. So, you are hopeful when you see a letter in your inbox from the action editor of the paper. Does he or she, and the reviewers, think your paper is as great as you do?
The answer is almost certainly no. In our experience, virtually no papers are accepted without any changes. (It has never happened to any of the three of us.) There are, roughly speaking, four categories of rejection, and our comments later in the chapter will be nuanced to take into account the type of rejection.
The first category of rejection is what we will call desk rejection or triage, which means having your paper rejected by the editor without outside review. In fact, for some journals, like Psychological Science, you may be lucky just to get your paper reviewed. About 70 percent of submitted papers at that journal are desk-rejected, meaning that the two editors who first examined your paper decided it did not have a chance of publication at that journal. However, most psychology journals do not have such high triage rates, and most papers are sent out to review. Assuming your paper made it past the triage stage, there are three other types of rejection of varying severity.
The second type (after desk rejection) is what we can call the flat-out rejection: rejection with no opportunity for resubmission. Your precious paper has been sent out to review, but it has received two or more mostly negative reviews. In this case, you usually get a brief note from the editor saying it is rejected without possibility of resubmission. Flat-out rejection is the worst kind of rejection for any reviewed paper, because it means you have to start all over at another journal, assuming the reviewers have not convinced you that your paper suffers flaws so great that you should simply abandon the project. In our experience, authors rarely reach this conclusion.
A third category of rejection is revise-and-resubmit. In this case, the journal editor and reviewers see promise in your paper, but they also see flaws that need correcting. Thus, the editor writes that she/he is willing to reconsider your paper, but only after you have revised it to take into account most of the complaints of the reviewers. If you are lucky, the editor will specify which perceived problems are the most serious and must be addressed and fixed, if possible. This form of rejection means that you have work to do, but at least it is possible, perhaps likely, that you can fix the problems well enough for your paper to be accepted. We can find no data on the issue, but our guess from our personal experiences is that authors of most papers that are eventually accepted originally received a revise-and-resubmit form of rejection letter.
The best possible rejection letter, which is partly an acceptance letter, is for the editor to write, “Technically, I am rejecting your paper. However, I believe your paper has an excellent chance of acceptance. If you do X, Y and Z to your paper, I will accept it. I will not send your paper out for further review, but will handle it myself.” Although technically a rejection – that step ensures that the author will take the editor's points seriously – this form of rejection is actually the best possible outcome for your paper. In our experience, virtually no authors ever receive a letter on a first submission saying, “I am happy to accept your paper. Neither the reviewers nor I even have suggestions for you to consider.” Maybe some researchers get such letters, but we don't. And our guess is that you won't, either.
Reacting to Reviews
This section, like the others, is based on our experiences and listening to our colleagues. A small percentage of researchers may deal with reviews in a calm, mature manner: “Oh, this is what reviewers thought of my paper, how interesting! I see I did a poor job of expressing myself, of making my contribution clear. And, sure enough, the data analytic technique Reviewer 2 suggests is much better than the one I chose. I guess I will need to rethink this whole project. I am lucky to have gotten such astute reviewers.”
Most researchers cannot respond immediately in this mature fashion, although after a couple days of reflection on the critiques, this level of appreciation can often emerge. A widespread reaction on first receiving an action letter is to skim the letter and reviews to get some notion of how bad the news is, then walk away from the message (or, in the old days, chuck the letter and reviews into a desk drawer). There the material sits while the author occasionally reflects on the situation and musters the courage for a close look. If you find yourself in this latter category, and you may, we encourage you not to let this first phase last too long. Get back to the project after a few hours; discuss the reviews with your co-authors (if any) or with close colleagues, if there are no collaborators. Here are stages you may go through along your journey to resubmission.
Five Stages of Dealing with Rejection Letters
We find it useful to adopt Elisabeth Kübler-Ross's (1969) five stages of grief in thinking about how we deal with manuscript rejections. Of course, her original theory was about stages of grief upon learning of a terminal illness. Receiving a rejection letter on a paper you thought worthy is certainly nothing so dismal as that, but it can still inflict a sharp wound, especially for one of your first papers.
Denial. You click on the action editor's letter, expecting to be greeted by an acceptance. Instead, you get one of the two types of rejection we called flat-out rejection or revise-and-resubmit. The first one is truly depressing, but often the second one can seem nearly as bad: “The reviewers want me to reframe the paper, consider a new theoretical framework we did not use, use a new data analytic strategy, cite their work more, and suggest that we will reach different conclusions. Surely, they must be kidding! My paper cannot really need that much work. Is it really that bad?” That's denial at work, and it can lead to:
Anger. “No, damn it, it's not that bad. I have been thinking about this problem and reading about it as much as or more than the reviewers. My collaborators and I know what we are doing. We are going to fight this decision. Reviewer B, in particular, is off his or her rocker. It's probably Snerdly. He always has it in for me. Too cowardly to sign his review, but I bet it is him.” Anger feels good. Enjoy the catharsis. Go have a beer with your collaborators and beat up on the reviewers for a while. But don't write the editor and ask for reconsideration because Reviewer B is an idiot (or for any reason, really). Appealing an editorial decision is not recommended; your editor made the best decision he or she could on the basis of the reviews, his or her own reading, and also some bottom-line recommendations (accept, major revision, minor revision) and checklists (how significant is the research question?) asked of the reviewers that you as author cannot see. Also, reviewers usually have the option of adding private comments to the editor. Anyway, after you and your collaborators get angry and vent for a bit, it's time to get serious about practical next steps, which leads to:
Bargaining. “Well, on reflection, Reviewers A and C, and even B, make some good points. I have made a list of eight major points suggested by the reviewers and endorsed by the editor. Maybe I could do the six I am coming to agree with, but make a case that the other two don't matter. I'll talk to my co-authors on the paper.” This may lead to:
Depression. “Oh, my colleagues and I disagree on which of the eight points we should attend to. Lisa thinks the two points I omitted are really among the most important ones. Steve seems to agree. Of course, as first author, I need to do most of the work, so that's easy for them to say. Still, they do make good points. The reviewers really were thoughtful, even wise. How could I have not seen these things when I was working on the paper?” This thinking leads to:
Acceptance. “Well, all right. I still disagree with the reviewers on some minor things, but let's face it, by and large all eight points need attention. I will just have to buckle down and work hard on the paper. The upside is that if I do what the reviewers suggest, including several analyses that I would have never thought of myself, the paper will be much better, is quite likely to be accepted, and I will look a lot smarter than I apparently am.”
Why Are Manuscripts Rejected?
Many reasons exist for rejection, and often the reasons are particular to your paper. Nonetheless, common reasons for rejection exist; the three of us have many years of editorial experience, which means we have written numerous rejection letters. Here is our list of common reasons for rejection. The list is admittedly idiosyncratic, but try to avoid these pitfalls in a submission or resubmission.
1 “This manuscript does not make a significant contribution to the literature.”
This one is a killer, because it is based on the reviewer's subjective opinion, having just read your paper. But when you are writing your manuscript, you will want to keep this potential criticism in mind. Do make your research question clear. Do not write a manuscript that is narrow in scope (e.g., designed to clean up some flaws in Experiment 2a of some previously published paper). Tell the reader what interesting knowledge they will possess after they have read about your studies, and highlight the importance of this knowledge. However, do not overdo it. Don't say “This manuscript makes a profound contribution to the literature for the following reasons.” Avoid calling your own work “important,” claiming to show something “for the first time,” or using other similar terms. You do want to leave the impression with the reviewer that the work is important, but you must write so that they draw that inference themselves, rather than being baldly told. On the other hand, you may need to ask yourself, “What is the realistic contribution of this paper to the literature? Am I aiming at too lofty a journal?” Journals differ in their standards for what is a “significant contribution,” and judging where your paper will fit is important. Of course, you should start at the best outlet that you think is appropriate. You can always work your way down the hierarchy, if you need to. But try to be realistic in the first place. Not all your work belongs in high-profile journals, and that doesn't mean it isn't worth publishing.
2 “This manuscript is boring” or “This manuscript is incomprehensible and filled with jargon.”
These are two of the more deadly comments from reviewers. Assuming your research and findings are not criticized and that you believe they are interesting, then the problem usually lies with your writing. You need to tell a compelling story. You present an issue or puzzle in the introduction; you specify the means for addressing or solving it in the method; your results provide the outcome; and the discussion finishes off the rest of the story. Skillful writers can bring a reader through a paper like a good story, making him/her eager to know how the research came out. You need to write with verve, but without going overboard. Find researchers whose papers you admire and study them: What made their papers great? It's also worth considering that this kind of criticism is much more likely at a general journal (e.g., Psychological Science) than a specialty journal (e.g., Journal of Experimental Psychology: Learning, Memory, and Cognition). The broader the journal's audience, the greater the requirement for broad appeal. A more specialized journal will not have the same broad-appeal requirements, although needless jargon (and numerous acronyms) should still be avoided, and contact with a big-picture research question is still important.
3 “The authors missed some of the literature and they should consider the recent papers of James Madison on this topic.”
This kind of comment often (but not always) comes from James Madison himself, hiding behind the anonymous review process. It can be irritating. On the other hand, in our experience, the reviewer is often right. Keep in mind that the reason a reviewer gets your paper to review is that she or he is an expert in the topic area or a closely related one. Hence, they also know the literature well, and they may have some knowledge that you do not possess. This is especially true if you have just begun research on this topic. So, consider such comments carefully.
4 “The authors have recently published several other papers more or less like this one. This contribution is merely incremental and should be bundled with others to create a stronger package.”
This is another killer, especially if the reviewer is right. The reviewer is basically accusing the author of piecemeal publication: the practice of scattering research using the same paradigm across many journals when it would have been more appropriate to package the research in two or three papers involving multiple studies. Our best advice is to avoid ever receiving this complaint by creating a strong package of multiple studies (when appropriate). If you do get this complaint, consider adding other studies so that your manuscript's contribution will be perceived as more substantial.
5 “This paper is flawed because of X, Y, and Z.”
X, Y, and Z could pertain to some specific method, confounding, or analysis that is unique to your study, but we consider a few general problems here. In the last few years, and certainly since the Simmons, Nelson, and Simonsohn (2011) paper on “false-positive psychology,” several questionable practices have come under attack. One is conducting studies with too few observations to be powerful enough to detect effects (judged a priori), and then reporting an effect in a single study with a p-value of .039, that is, barely below .05. In this case, reviewers may suspect “p-hacking,” meaning that the author exploited “researcher degrees of freedom” (Simmons et al., 2011): eliminating data from some subjects, testing multiple hypotheses in the data and reporting only the significant ones, increasing the sample size until just enough subjects have been tested to get below the .05 level of significance, and so on. Most journals today require transparency in the form of at least open publication of the underlying data on which your manuscript is based. And it is no longer enough to justify sample size by saying, “we used the number of subjects customary for research of this type.” After all, the problem that Simmons et al. (2011) pointed out is that the extant literature is underpowered; saying you used the same number of subjects or observations simply compounds the problem that already existed. Instead, most journals want some type of power analysis based on past findings, although such analyses themselves involve assumptions (or guesswork). In short, no amount of framing and novelty can overcome the concern that the findings themselves are not solid. Do your best to convince the reader that your work is solid and replicable, for example, by replicating it yourself with many data points per observation. If you can't do so, it may not be time for publication.
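To make the power-analysis idea concrete, here is a minimal sketch of the standard normal-approximation calculation for a two-group comparison. It is illustrative only: the function name, the assumed two-sample t-test design, and the example effect size are our own choices, not anything prescribed by a particular journal, and dedicated tools (e.g., G*Power or statsmodels) give more refined answers.

```python
# Approximate per-group sample size for a two-sample comparison using the
# normal approximation: n ≈ 2 * ((z_{1-α/2} + z_{power}) / d)^2,
# where d is the expected standardized effect size (Cohen's d).
from math import ceil
from statistics import NormalDist


def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Round up the normal-approximation sample size for one group."""
    z = NormalDist().inv_cdf  # standard-normal quantile function
    n = 2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2
    return ceil(n)


# A "medium" expected effect (d = 0.5) at the conventional alpha = .05
# with 80% power:
print(n_per_group(0.5))  # 63 participants per group
```

The point the calculation makes vivid is the one in the text: detecting a modest effect reliably takes far more observations than many published studies have used, and halving the expected effect size roughly quadruples the required sample.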
Of course, we have provided only a sampling of common criticisms of papers here. You will get specific remarks based on your paper. We encourage you to consider each carefully and thoughtfully, even when (or especially when) doing so is painful. Most editors would not expect you to change your manuscript in response to every quibble that every reviewer has, but you should give them honest consideration and address each of them in your Response to Reviewers (see below). And yes, reviewers can and do make mistakes and (say) ask for information you already provided. We do not mean to suggest that they are infallible. We move on now to the next stage in your process.
What to Do after Rejection? The Process of Resubmission
As we noted above, several types of rejections exist. We discuss briefly how to approach each type.
If your paper was triaged and you got little feedback, it is hard to know what to do next. Journals with high triage rates (Psychological Science, Science, Nature) often value “perceived novelty,” and your paper perhaps did not pass this bar. Some question whether this is even a reasonable bar, arguing that the demand for novelty is corrupting the scientific process: Researchers make extravagant claims based on less than compelling evidence, hoping for acceptance on the basis of novelty rather than solid science. Still, in the case of a triage decision, you should revise your paper, taking into account whatever morsel of feedback you received, and try a different journal, probably one that is more specialized to your field of work.
If your paper was flat-out rejected but reviewed, you will be submitting to a new journal (or possibly shelving the work if you become convinced it is a dead end). You should never resubmit the same paper unchanged, or with only a few words altered. After all, some of the leading experts in the field have just given you their best advice, so you would be foolish not to use it in revising your paper. More pragmatically, the editor of the new journal you submit to may send the paper to one of the previous reviewers. This happens all the time, because most topics have a small coterie of well-known researchers; you do not want them to see that you did not bother to incorporate any suggestions of the two or three experts who volunteered their time to review your prior submission. We know from experience that this situation is very frustrating (from a reviewer's point of view), and you can be sure the reviewer will relay the fact to the editor (i.e., that you failed to consider reviews from a prior journal before submitting to the new journal). Even if the initial reviews misinterpreted part of your paper, you can be sure that there are some morsels of insight in any set of reviews. Besides, if a reviewer misses the point of your paper, it is worth considering that at least part of the problem lies in your writing.
If you do submit a previously rejected manuscript to a new journal, you have the option of telling the new editor about your prior rejection. People disagree on this issue. On the side of not informing the editor about your rejection, there will be no stigma attached to the paper and the editor will not be thinking “If journal X did not take this paper, why should I? My journal is as good as that one.” Of course, the editor might find out it is a resubmission from one of the reviewers.
The other avenue is to be upfront with the new editor about your rejection. Say, in your letter of submission,
I am submitting this paper to your journal. A former version of this manuscript was rejected from Journal X, but my co-authors and I have carefully reformulated our paper based on the previous reviews. We feel it is now a substantially improved paper. We are attaching the previous action letter and reviews for you, should you want them, along with responses to the reviewers’ and editor's points.
If you follow this procedure, the new editor may be impressed. That is especially so if the initial reviews were reasonably positive (some may have recommended publication), and you have dealt satisfactorily with the points raised. We (authors of this chapter) have experienced almost immediate acceptance of manuscripts when we followed this procedure, sometimes without the editor seeking further outside review. Still, this procedure lets the new editor know that the paper has been rejected by another journal, which can be a problem. Then again, if you received reasonably positive reviews but the editor decided to reject it because it was not “novel,” it is probably worthwhile in most cases to notify the editor about your prior rejection. She or he is likely to find out during the review process, anyway.
The next class of rejection to consider is the revise-and-resubmit decision. In some ways this is the trickiest, because the editor may have told you what you have to do and you may disagree with some points. One step to take is to decide, with your co-authors, on a list of items you must do (say, they were endorsed by two reviewers and the editor), things that you cannot do (tell the editor why – and if the answer boils down to “because I would have to do a lot of work,” then just get busy and do it), and points that you believe are misguided or plainly wrong. However, you need to defend these last points in your response.
Perhaps our best piece of advice in this section is for you to write a careful, compelling Response to Reviewers to accompany your resubmission. Do not write a short note saying, “we did all the things we thought were worth doing” or even “we did everything.” Instead, reprint the action letter and reviews in their entirety, with comments interspersed (perhaps in another font color) to say how you have responded to the comments/concerns. Here, we usually say what we did and then quote from the new section of the manuscript. Once the editor has read the comprehensive response to reviewers, he/she should have been reminded of all the prior concerns and have a good sense of how the manuscript has changed. He/she can then read the entire manuscript with these points in mind. Most of your responses should say things like “we appreciate the reviewer pointing out this ambiguity, and we have clarified the issue by adding the following text …” On occasion you can disagree with a reviewer and explain why, but this option should be used sparingly. Remember that the reviewers will likely be reading your Response to Reviewers, and if you push back on too many points, they are unlikely to come back with a recommendation to publish. That said, if you fundamentally disagree with a reviewer's point and think the manuscript will be substantially worse if you follow their suggestion, don't do it.
Last, consider the rejection that is nearly an acceptance, the technical rejection pending a few changes. This is a great letter to get, but do not assume you are home free. We advise taking the changes seriously and (as before) writing a formal response to the reviews. In short, act as though you got a revise-and-resubmit letter, so the editor will be persuaded. Better to be too careful than not careful enough.
Reflections on the Peer Review System
The literature on the psychology and sociology of science (as well as in individual fields) is replete with attacks on the peer review system: that it is shallow, unfair, and biased; that it lets reviewers snipe behind the barrier of anonymity; and that it is far too conservative, so that “great papers” are routinely rejected (e.g., Csiszar, 2016; Kelly, Sadeghieh, & Adeli, 2014; Smith, 2006).
Yes, bad and unfair decisions can be made, and they are made. Nonetheless, we defend the peer review system as better than any possible alternative (Roediger, 1987). The editors of journals are incredibly busy people, as most of them are trying to maintain their own research, teaching, mentoring, etc., as well as to edit a journal. The reviewers give their time for free to examine papers and provide opinions and advice. No one has the goals of being unfair or blocking scientific progress in mind, even if the system occasionally does that.
The whole issue of the “replication crisis” in psychology points to our traditional system of publication in many fields being, if anything, too liberal, with the publication of numerous “findings” that cannot be replicated. The standards for publication in the field are changing, and reviewers’ standards will be changing, too. The entire field of psychology is upping its game. Thus, the standards for publication will become even more stringent, rejection more frequent, and the angst of rejected authors perhaps greater. Still, we see the peer review system as the best hope for our field and for science in general.
One last thought. We have discussed the case of being rejected by one journal. Suppose you receive a flat-out rejection from one journal, make a good-faith revision and try a second, perhaps less impactful, journal, and then get rejected again by a new editor and a new set of reviewers. Then it might be time to take a hard look at your project and ask yourself whether you should move on to something else. But if you really believe in the importance of your findings and your project, persevere! We suspect that the trait of perseverance is one key to success in academia.
Conclusion
Dealing with rejection is never easy, especially early in one's career. Still, if you are to be a researcher and academic, you had better get used to it; rejection is part of the publication process. We hope the strategies presented in this chapter provide some tips for turning a rejection into a publication.
A few years ago the first author had a weekly meeting with a graduate student who was among his best students ever. She had arrived with a Master's degree in hand and, in short order, had written a review paper and an empirical article that was about to be sent off for possible publication, and had defended her dissertation prospectus. The agenda for the meeting was to consider predictions made by alternative models that she would evaluate for her dissertation.
Meetings with productive graduate students are the meetings faculty most look forward to, and this was no exception. Before talking about her proposed dissertation, however, the student stated that she had heard from a journal. It took a second to remember that she had sent her review paper to a journal about four months ago. This was the first article she had written for a psychology-related journal, and would be her first sole-authored publication if accepted.
Unfortunately, the news was that her article had been rejected. She handed over the letter from the editor and the sole review. The editor thanked her for sending the manuscript to the journal, and wished her well in revising the article for possible publication elsewhere. The review began in a begrudgingly complimentary fashion, noting that the writing was “pretty good” and that the conceptual views reviewed in the manuscript were “interesting.” In the course of his university career, the first author has read several thousand reviews, either as part of the review process for someone else's work or for his own. Although one can try to read between the lines too much, one could sense tension, as though the reviewer was angry about something.
What could the problem have been? The article wasn't overly long – something that does tax reviewers who have many demands on their time. The article wasn't critical in the sense of challenging or attacking someone's work or cherished position, so it didn't seem possible that she had offended someone. Although the point of the article was to bring material from other disciplines into a discipline that had not yet made these connections, the article seemed to be within the realm of what the journal purported to publish, so it did not seem that she had wasted the reviewer's time with a manuscript that was a bad fit and that the editor should never have sent out. The writing really was good, in terms of clarity, so the reviewer should not have become frustrated trying to figure out what the student intended to communicate.
Then came the sentences that made clear what the problem was. The reviewer commented on a conceptual problem, namely, that the manuscript had not included a view the reviewer believed to be relevant to the discussion. The reviewer pointed to a book that the student had not cited. A quick look at the editorial board of the journal revealed that the author of the book she had omitted mentioning was on the editorial board and was the likely reviewer of the student's manuscript.
Although one can't know what the outcome might have been otherwise, the student certainly hurt her chances by not citing potentially relevant work done by the reviewer. What made this lesson even more painful was the senior author's memory of sending off his first manuscript from graduate school. After a number of revisions, his advisor agreed that the manuscript was ready to be sent off and asked to see the guidelines to authors and editorial board composition of the journal the manuscript would be sent to. He reviewed the composition of the editorial board, looking for individuals whose work should have been cited but had not been. Had the senior author done this same exercise with his graduate student, the outcome might have been more positive. To add insult to injury, the topic of the senior author's dissertation (Wagner, 1987), and other work (Sternberg et al., 2000; Sternberg & Wagner, 1986; Wagner & Sternberg, 1985, 1987), was the importance of tacit knowledge (i.e., practical know-how that usually is not taught formally) to professional achievement in domains including academic psychology. The dissertation established that tacit knowledge about the ins and outs of publishing was related to outcome criteria, including number of publications and citation count. Had he learned nothing from his dissertation and from the wisdom of his helpful advisor?
We don't purport to have any special wisdom about publishing, which won't be much of a surprise, given the experience we just related about the graduate student's first psychology manuscript. It also is the case that excellent guidance on scientific writing exists, including the present book and others (e.g., Sternberg & Sternberg, 2016). We do think, however, that some practical information about publishing, such as how to avoid slighting individuals by failing to cite their work when you should have, rarely gets discussed. Perhaps the best example of such an omission is the topic of rewriting. Virtually no manuscript is accepted in original form. But in our experience, skillful rewriting – revising one's manuscript in response to feedback from editors, reviewers, colleagues, or even your own initial reading – is not fully appreciated.
Having had her first and certainly not her last taste of disappointment, it was time for the graduate student to roll up her sleeves and get on with the business of rewriting for publication. First on the agenda was to feature the missing conceptual view pointed out by the reviewer and to look more exhaustively for others. A researcher who reviewed for the journal that had just rejected her manuscript could well be invited to review for the next journal she tried.
Being Miserly with Words but not with Citations
Most drafts of manuscripts need a good bit of pruning. When writing for publication, economy counts. Every word must serve a purpose. Page limitations preclude irrelevant material. Although it is vital to be economical in choice of words and content, one nevertheless should be generous in attempting to relate one's own ideas to those of others.
When the first author's advisor scanned the editorial board to see whether relevant citations of potential reviewers had been omitted, the first author thought it a rather cynical enterprise. After all, if someone's work had not surfaced when writing the introduction or discussion, why stick it in now – besides the self-serving answer of wanting to improve the chances of having the manuscript accepted for publication? This reaction reflected two assumptions that are naïve if not erroneous.
The first assumption was that the identity of all related work would be known in the routine course of carrying out the research. Given the sheer volume of what is published annually, it is likely that an author will be unaware of the relevant work of at least someone. Fortunately, the bad luck of having one's work reviewed by that very someone is a bit rarer. In the case of the graduate student, her area was an active one, with a large number of books and relevant articles. In her defense, the book she missed was not particularly recent or notable. But it was an omission. Most writers tend to overvalue the contribution of their own ideas and writing, and to undervalue those of others. This bias, combined with the practical difficulty of keeping abreast of exploding literatures, makes it probable that one will omit potentially relevant work. As a beginning researcher and author, it is easy to be overwhelmed by the amount of potentially relevant literature, and it can be difficult to determine how much searching is enough. Scanning the editorial boards of all of the journals in an area, as well as study section rosters for grant funding agencies, is one way to identify possible omissions of important sources, because these individuals tend to be among the major contributors to their areas of expertise.
The second erroneous assumption was that reviewers would approach, if not actually achieve, complete objectivity in their appraisal of one's manuscript. By its very nature, writing a review represents a combination of subjective and objective elements. When a reviewer reads a manuscript that is perceived to be closely related to a contribution of someone else, and that contribution is not cited, it is hard for the reviewer not to infer that the author is a little arrogant, ignorant, or careless. What we know about human judgment and decision-making suggests it is likely that evaluation of a manuscript is colored by such inferences.
In retrospect, our changing view about the value of generously citing the work of others was predicted by results from the first author's dissertation (Wagner, 1987). Academic psychologists, graduate students, and undergraduate students were asked to rate the quality of various research-related strategies on two scales. An ideal scale represented the value of the strategy ideally, without regard to practical reality. An actual scale represented the practical value of the various strategies, given the world of academic psychology as it exists. The interesting finding was that discrepancies between ideal and actual ratings decreased with experience, being largest for undergraduates, next largest for graduate students, and virtually nil for experienced academic psychologists. Whether the convergence of actual and ideal noted for experienced academic psychologists represented (a) slippage in their ideals over time, (b) a selection factor, in that individuals whose ideals were closer to realities were more likely to select and stay in a profession, or (c) both, wasn't clear and still isn't. We now believe that generously citing the work of others is an important correction for a natural egocentrism that is common among writers, and fortuitously reduces the probability of offending reviewers who feel slighted by our omitting reference to their work.
Soliciting Feedback Actively
Part of the process of writing for publication is receiving feedback from reviewers and editors. But the amount of feedback obtained can be variable, and, in general, is paltry relative to the feedback available to those who seek it actively. Seeking feedback actively and incorporating it can improve your research and writing.
With the exception of the occasional sadistic reviewer, most of us find it more pleasant and easier to give positive feedback than negative. Consequently, negative feedback may be left unstated; however, with some diligence and tact, it often is possible to obtain negative feedback. Most colleagues and mentors are quite willing to be a source of additional feedback. Individuals are flattered that you value their judgment enough to ask them to provide comments on a manuscript. By explicitly asking for suggestions about how to make the manuscript better, you provide individuals with a comfortable way to convey negatives in a constructive way.
Another means of actively soliciting feedback is to seek out opportunities to give talks about the work and to seek out researchers in the audience who do related work for discussion after the talk. The obvious consequence of actively seeking feedback is an increase in the amount of feedback you get. Because not all feedback is valid, an increase in the amount of feedback you obtain means an increase in valid as well as invalid feedback.
Separating the Wheat from the Chaff
Feedback needs to be evaluated rather than accepted at face value. Reviews typically contain some good advice and some not-so-good advice. Given the amount of time you have spent on your project, and the comparably minimal time a reviewer spends, it is not surprising that some feedback will be unwise or simply wrong.
Beyond this, it is important to determine, as best one can, whether the editor and/or reviewers want a successful outcome for you, or whether they are convinced that your manuscript is a poor fit to the journal and their views are not likely to change regardless of revision. Ideally, but not always, the editor's intentions will be clearly spelled out in the action letter that accompanies the reviews. If the overall impression of the editor or reviewer is positive, he or she may try to provide recommendations and comments for revision in a genuine attempt to improve the final product.
On the other hand, the editor or reviewer may believe that the topic of the manuscript simply is not interesting enough to merit publication, given the existing literature. Many editors and reviewers are not comfortable condemning a manuscript outright for something that cannot be fixed; rather than doing so, they provide a laundry list of more specific comments and recommendations. They attempt to justify the global decision with specific comments and recommendations that really are chaff (like the strips of metal foil released from planes to confuse radar), as opposed to feedback that, if implemented, would result in acceptance of the manuscript. Though this roundabout method of rejection may seem discouraging, it is important to take the comments for what they are worth and use every opportunity to better your manuscript for the next time you attempt to publish it. As mentioned before, reviewers may serve on the boards of multiple journals and could potentially see your paper more than once.
Anticipate Unintended Inferences
What does it mean to know a word? Does knowing a word mean that you can define it? Does it mean that you can use it in a sentence? Does it mean that you have relational knowledge about the word such as knowing antonyms and synonyms? Does it mean that you can choose the correct morphologically related variant of the word (e.g., run, ran, running) for a given context?
Given that humans use words to think as well as communicate, a study that adds to our knowledge about what it means to know a word would seem to be of profound importance and worthy of publication in one of the most prestigious and widely read journals. The first author and another graduate student thought they had such a study. There are existing measures of definitional knowledge, usage, relational knowledge, and morphological knowledge. Performance on such measures is correlated but not completely so, suggesting that these represent related yet distinct aspects of vocabulary knowledge. But we recognized a limitation in these measures that apparently had been overlooked. The measures had different vocabulary words on them. Differential knowledge of the specific vocabulary words that appeared on the different tasks could reduce the correlations among the tasks artificially. We had what we thought was the brilliant insight to do a study in which participants were asked questions that had them demonstrate their definitional knowledge, usage, relational knowledge, and morphological knowledge for the same vocabulary words, thus eliminating any effect of differential knowledge of specific vocabulary words on the correlations among tasks. We used a state-of-the-art model to show that each of the purportedly different kinds of tasks actually measured the identical underlying source of vocabulary knowledge. For the task of measuring vocabulary knowledge, the tasks were interchangeable because they all were measuring the same thing.
We sent our manuscript off to a very prestigious and widely read science journal, and our manuscript was rejected. It was reviewed by researchers who studied morphological knowledge and who appear to have drawn an inference from our manuscript that we had not intended: that they were studying something that really did not exist. This happened on several occasions. We kept rewriting and submitting until the manuscript finally was accepted in a good journal, but one less prestigious and more specialized than we believe was warranted given the contribution (Spencer et al., 2015). To get our manuscript published, we reconceptualized our findings in a way that made them more palatable to researchers who made a living studying one of these kinds of vocabulary knowledge and that also helped eliminate unintended inferences. We suggested that the different kinds of vocabulary knowledge represented important facets of what it means to know a word, and that when one walks around the world and learns vocabulary words from context, one learns something about each of the different facets. Had we thought more carefully about unintended inferences, especially ones some reviewers would find highly threatening to their work, we might have been able to publish our manuscript in one of the earlier outlets we had tried.
Embrace Your Limitations
According to some researchers, there is a phenomenon called specific reading comprehension disability. It refers to people who struggle to comprehend what they read despite being able to read the words on the page. We were skeptical that this phenomenon existed, believing that individuals with so-called specific reading comprehension disability really had a general deficit in language that was not specific to reading comprehension. We had access to an ideal dataset to test whether the phenomenon was real, a dataset made up of hundreds of thousands of students who happened to attend schools that served students living in poverty. We first counted up the number of students who were poor at reading comprehension. Then we kept only students who were both poor at reading comprehension and adequate at reading the words on the page, which was about half the number we started with. Finally, we kept only students who were poor at reading comprehension, adequate at reading the words on the page, and also adequate in oral language as indicated by their performance on a vocabulary measure. Less than 0.1 percent of the original sample of students who were poor at reading comprehension remained.
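The winnowing described above is, at bottom, a chain of conjunctive filters applied to one dataset. A minimal sketch of that logic in Python (the column names, percentile cutoffs, and records below are invented purely for illustration; the actual study used different measures and criteria):

```python
# Toy records: one dict per student, with hypothetical percentile scores.
students = [
    {"comprehension": 8, "decoding": 15, "vocabulary": 12},
    {"comprehension": 9, "decoding": 40, "vocabulary": 10},
    {"comprehension": 7, "decoding": 45, "vocabulary": 50},
    {"comprehension": 60, "decoding": 55, "vocabulary": 52},
]

POOR = 10      # at or below the 10th percentile counts as "poor" (assumed cutoff)
ADEQUATE = 30  # at or above the 30th percentile counts as "adequate" (assumed cutoff)

# Step 1: students poor at reading comprehension.
poor_comprehension = [s for s in students if s["comprehension"] <= POOR]
# Step 2: of those, students also adequate at reading words on the page.
also_adequate_decoding = [s for s in poor_comprehension if s["decoding"] >= ADEQUATE]
# Step 3: of those, students also adequate in oral language (vocabulary).
also_adequate_language = [s for s in also_adequate_decoding if s["vocabulary"] >= ADEQUATE]

# Each successive filter shrinks the pool: 3 -> 2 -> 1 in this toy sample.
print(len(poor_comprehension), len(also_adequate_decoding), len(also_adequate_language))
```

With hundreds of thousands of real records, the same three filters left less than 0.1 percent of the students who were poor comprehenders.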
We thought this was strong evidence that our skepticism was warranted. The reading comprehension problem was not specific to reading at all but rather a manifestation of a general language deficit. We presented a talk on this work at a conference, and the audience included researchers who championed the phenomenon of specific reading comprehension disability. In looking around the room while giving the talk, we noticed icy stares and frowns from these researchers. During the discussion after the talk, none of them raised their hands to comment until the very end. Finally, one of the researchers commented that an important limitation of our study may have been our sample. Children living in poverty were overrepresented in our sample, and general language problems are more common in children living in poverty. Maybe that is why we failed to find students whose comprehension problems were relatively specific to reading. We had to acknowledge that as a potentially important limitation of our study, and we suggested that additional studies on other samples were needed to address it.
We wrote up the study and featured this limitation prominently in a limitations section at the end of the manuscript (Spencer, Quinn, & Wagner, 2014). In a subsequent series of studies, we discovered that our earlier skepticism about the existence of individuals with specific reading comprehension disability was wrong (Spencer & Wagner, 2016, 2018). Although individuals who are poor at reading comprehension despite being adequate at decoding do tend to have oral language problems, their average deficit in reading comprehension is three times greater than their average deficit in oral language. Consequently, the oral language problem is not a sufficient explanation of the reading comprehension problem, and something else must be contributing to the poor performance in reading comprehension. We missed this fact in our original study because we were just counting individuals instead of taking into account how severe their problems were.
In publishing our original manuscript, we were fortunate to have had a limitation pointed out to us before we wrote it up, so we were able to acknowledge it. Often, additional limitations one hadn't thought of will come up as part of the review process and should be acknowledged. Although one might be tempted to hide limitations one is aware of rather than acknowledging them in a manuscript before sending it off for review, it is much better to acknowledge them than to have them discovered by reviewers. When you acknowledge even important limitations, you convey the message that your contribution is still important or you would not have submitted it for possible publication. When a reviewer discovers a limitation you haven't acknowledged, it can call into question the importance of your manuscript.
Perseverance
Publishing is one of the two principal ways that psychologists influence their respective fields (the other is influencing students who subsequently publish). Why, then, do so many reduce their rate of publishing, if not stop entirely, once tenure or promotion is no longer at stake? The answer in part may be the low rate of return for their effort, as well as the unpleasantness of the process of peer review. Most manuscripts are rejected by the selective, valued journals in a given field. Rejection rates for APA journals, for example, typically are 80 percent or higher. Of the smaller proportion of manuscripts that are eventually published, few will have an impact, as measured by citation count: the frequency with which the article is cited by someone else who publishes an article, chapter, or book. In fact, the citation counts of the majority of scientists in any field, including psychology, are near zero. This means that in a typical year, someone who succeeds in publishing will not make reference to any publication of the typical researcher.
Nevertheless, a small proportion of individuals will have a profound effect on a field. The distribution of citation counts is highly skewed. Although the majority of scientists have citation rates near zero, a minority have citation rates in excess of 100 citations. In most fields that have been studied, the top 10 percent of most prolific contributors account for roughly half of everything produced, and the top 20 percent account for roughly 80 percent of everything produced (Simonton, 1996).
Dennis (1955), for example, studied the distribution of productivity in seven fields: eighteenth-century American secular music; books in the Library of Congress as of 1942; publications in the fields of gerontology and geriatrics; publications in the field of North American geology, 1929–1939; publications on infantile paralysis to 1944; entries in Chemical Abstracts, 1937–1947; and publications on linguistics, 1939–1947. The combined results were that the top 10 percent of producers contributed 50 percent of the work. For publications in the field of psychology, the top 10 percent of producers contributed 40 percent of the work, while the bottom 50 percent contributed only 15 percent (Dennis, 1954).
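Dennis's finding can be restated as a simple computation: rank producers by output and ask what fraction of the total the most prolific 10 percent account for. A sketch (the publication counts below are invented to illustrate the calculation, not drawn from Dennis's data):

```python
def top_decile_share(counts):
    """Fraction of total output contributed by the most prolific 10% of producers."""
    ranked = sorted(counts, reverse=True)  # most prolific first
    k = max(1, len(ranked) // 10)          # size of the top 10%
    return sum(ranked[:k]) / sum(ranked)

# Hypothetical publication counts for 20 producers:
# a couple of prolific contributors, and many with a single publication.
counts = [30, 12, 5, 4, 3, 3, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]

# The top 2 producers (10% of 20) contribute 42 of 74 publications, about 57%.
print(round(top_decile_share(counts), 2))
```

The more skewed the productivity distribution, the larger the top decile's share; real bibliometric distributions tend to be even more concentrated than this toy example.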
What is it that differentiates the top producers from the rest of the pack? One possibility is that they simply have more success publishing their work. In other words, their “hit rate” is higher because the quality of what they produce is higher. Early studies that examined ratios of contribution quality (i.e., those that had a measurable impact) to quantity (total number of contributions) suggested that this might indeed be the case (Dennis, 1966). However, these analyses were flawed in comparing quality of contributions from one database to total number of contributions in another (Simonton, 1988). When more recent studies compared quality and quantity of contributions for the same producers, a different pattern of results emerged (Simonton, 1985). This pattern is well described by the equal-odds rule, which holds that the proportion of quality contributions to total contributions is relatively constant across individual researchers, and within individual researchers across different periods of their careers. What differentiates top producers from the rest of the pack is not a higher hit rate but sheer productivity. They surpass others not only in hits, or quality contributions, but also in total contributions, including misses. Perseverance counts. When one experiences disappointing rejection, the adaptive response is to continue working and publishing as hard as ever. Something eventually is likely to be a big hit and will propel your career forward.
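The equal-odds rule is easy to simulate: give every researcher the same per-manuscript probability of a hit and vary only how much they produce. A sketch (the hit probability and career sizes are arbitrary assumptions for illustration):

```python
import random

random.seed(0)
HIT_RATE = 0.2  # same odds of a high-impact paper for everyone, by assumption

def career(n_papers):
    """Return (hits, total) for a researcher who writes n_papers manuscripts."""
    hits = sum(1 for _ in range(n_papers) if random.random() < HIT_RATE)
    return hits, n_papers

modest = career(10)     # a researcher who writes 10 papers
prolific = career(100)  # a researcher who writes 100 papers

# Both hit *rates* hover around the same 0.2; the prolific researcher simply
# accumulates more hits (and more misses) in absolute terms.
print(modest, prolific)
```

Under the rule, the way to more hits is not a better batting average but more times at bat.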
Reacting Adaptively
Negative feedback is directed at our manuscripts rather than ourselves. Most guidelines that editors send to reviewers make clear that ad hominem attacks are inappropriate, and conscientious editors interdict any such comments rather than send them on to authors. Nevertheless, it is hard not to take negative comments and decisions about one's manuscripts personally. The temptation is to counterattack, especially if there were inaccuracies in reviewers’ comments, as is likely to be the case.
The first thing to do is find a way to handle your anger and disappointment in a way that does not jeopardize what you are trying to accomplish. A mistake we see colleagues make that we believe arises from anger is engaging in pointless debates with editors that, at best, will win intellectual points but won't affect the outcome for the manuscript. The editors to engage are those whose letters encourage revising and resubmitting. When given the opportunity to revise and resubmit, the effective strategy is to deal with every point in the letter from the editor and each reviewer. Dealing with every point does not just mean rolling over – there may be a good reason not to implement a recommended revision. Most reviewers and editors are quite willing to accept, and probably even expect, noncompliance when it is justified by a coherent rationale. Some unsound recommendations may arise from a misreading of the manuscript by a reviewer. In such cases, the error can be pointed out in a nice way, and it is a good policy to change the relevant text to make it less likely that a reader will make the same mistake.
Getting a Head Start
When a manuscript or grant proposal is submitted, a candid self-assessment can be valuable in predicting liabilities that may prove problematic in the review process as well as in carrying out future research. The first author has a vivid memory of his advisor calling a meeting of his research group right after sending off a manuscript for a book. The book provided a theoretical view that encompassed work his advisor had carried out for the previous decade. The purpose of the meeting was to identify and discuss limitations in the theory described in the manuscript just sent off for publication. His advisor was disparaging a book that had not yet made it to the bookshelves, and encouraging others to join in.
Here the goal was to chart the course of future research, rather than to anticipate the need for revision because the book already had been accepted for publication. But for manuscripts and grant proposals, the futures of which are in doubt, there may well be an obvious next study to do that can provide a data-based response to a likely future critique. The most powerful weapon to have in debates with reviewers and colleagues is supportive data.
In summary, rewriting is an overlooked and underappreciated subject that can make the difference in whether a manuscript ultimately is accepted for publication and can make a difference in your impact on the field over your career.
When you write an article, you want it to be high-impact – that is, one that many other scholars will cite. No matter who you are or what you do, there are no guarantees that any particular article will become highly cited. But there are matters to which you can attend that may help to increase the frequency with which your article is cited (see also Sternberg & Gordeeva, 1996).
Innovative, Counterintuitive Findings
Often, you cannot tell in advance whether your findings will be innovative or counterintuitive. An example of such a finding is the conjunction fallacy, recognized by Tversky and Kahneman (1983). Here is the classic example of the conjunction fallacy, as posed by Tversky and Kahneman:
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
Which is more probable?
Linda is a bank teller.
Linda is a bank teller and is active in the feminist movement.
Logically speaking, the first option must be at least as probable as the second, because the second option is more specific and thus includes a smaller set of cases than does the first. Yet people tended to view the second option as more probable, even though that is logically impossible.
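The underlying rule is that for any events A and B, P(A and B) = P(A) × P(B given A), which can never exceed P(A). A quick numerical check (the probabilities are arbitrary illustrative values, not estimates about Linda):

```python
# Arbitrary illustrative probabilities; any values in [0, 1] obey the inequality.
p_teller = 0.05                # P(Linda is a bank teller)
p_feminist_given_teller = 0.9  # P(active feminist | bank teller), deliberately high

p_both = p_teller * p_feminist_given_teller  # P(teller AND active feminist)

# Even with a conditional probability of 0.9, the conjunction
# cannot exceed the probability of "bank teller" alone.
assert p_both <= p_teller
print(p_both, p_teller)
```

No choice of numbers can reverse the inequality, which is what makes the majority's answer a fallacy rather than a difference of opinion.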
This was only one of many heuristics in everyday thinking that Tversky and Kahneman pointed out (see, e.g., Kahneman, 2013). For example, they also showed that people thought a sequence of girl–boy births like GBBGBG was more likely to occur than GGGGGG, even though the two are equally probable, because each birth is equally likely to be a girl or a boy and the births are independent (Tversky & Kahneman, 1982).
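The birth-sequence example comes down to the same arithmetic: with independent, equiprobable births, every specific sequence of six has probability (1/2)^6 = 1/64, however "random" or "patterned" it looks. A sketch:

```python
def sequence_probability(seq, p_girl=0.5):
    """Probability of one specific birth sequence, assuming independent births."""
    prob = 1.0
    for child in seq:
        prob *= p_girl if child == "G" else (1 - p_girl)
    return prob

print(sequence_probability("GBBGBG"))  # 0.015625, i.e., 1/64
print(sequence_probability("GGGGGG"))  # also 0.015625, exactly the same
```

The representativeness heuristic leads people to treat GBBGBG as more probable because it looks more like a typical random outcome, not because it is one.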
Tversky and Kahneman's success in generating surprising findings did not, of course, come out of nowhere. Rather, collectively, they had a knack for having penetrating insights into human behavior that defied conventional beliefs. If you want to present counterintuitive findings, then it helps to let go of the presuppositions you and others may have about human behavior. Have you observed anything about people that surprises you, perhaps that others have not noticed? Are there puzzling aspects of human behavior that psychologists just have not paid attention to?
Although it is nice to have punchy and surprising findings, some investigators have taken their intense desire to surprise to unfortunate extremes and have manipulated or even faked data (see, e.g., www.apa.org/science/about/psa/2011/12/diederik-stapel.aspx). Presumably, very few people take things to such an unethical extreme. But the point worth remembering is that surprising findings are great – so long as they are real and replicable. You do not have to uncover findings as stunning as Tversky and Kahneman's. Most of us, in our careers, should be happy if we can uncover a few surprises that people in general did not expect.
Methodological Innovation
Another way to have impact is to introduce a methodological innovation. Most of us will not make innovations at the level of Karl Pearson, who provided the formula for the product-moment correlation coefficient, or of Charles Spearman, who invented factor analysis. But we can look for innovations that are more within our grasp.
One of the methodological innovations that has had a profound effect on social psychology is the idea of implicit cognition and its measurement through the Implicit Association Test (IAT; see, e.g., Greenwald & Banaji, 1995; Greenwald, Nosek, & Banaji, 2003). The idea of the IAT is to distinguish, through implicit associations of which we are not even aware, our unconscious attitudes toward particular groups. Often these attitudes differ from our conscious (explicit) ones. In the IAT, we are timed while we pair words associated with a certain group with either emotionally positive or emotionally negative words. In this way, we can compare our implicit attitudes toward various groups. For example, if you are faster at associating white-sounding names with positively charged words than black-sounding names, and faster at associating black-sounding names with negatively charged words than white-sounding names, you might conclude that you have some kind of implicit bias. That said, there have been numerous interpretations of scores on the IAT, and it is not clear that it measures bias.
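The core intuition can be sketched as a comparison of mean response times across block types. This is a drastic simplification: the published scoring procedure (e.g., Greenwald, Nosek, & Banaji, 2003) involves error penalties and standardization that are omitted here, and all of the numbers below are invented.

```python
# Hypothetical response times in milliseconds for one respondent.
compatible_rts = [620, 650, 600, 640]    # pairings the respondent finds easy
incompatible_rts = [780, 820, 760, 800]  # pairings the respondent finds hard

def mean(xs):
    return sum(xs) / len(xs)

# A positive difference indicates the "incompatible" pairing slowed responses,
# which IAT research interprets as evidence of an implicit association.
iat_effect = mean(incompatible_rts) - mean(compatible_rts)
print(iat_effect)  # 162.5 ms slower in the incompatible blocks
```

As the text notes, what such a difference actually measures remains contested; the sketch shows only the logic of the timing comparison, not a validated bias score.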
There are lots of ways to make a methodological innovation. One, of course, is to come up with a new method. But another is to show that an existing method applies in a new circumstance. For example, when I did my dissertation, I used a particular form of mathematical model that had been used before, but not in the context of studying intelligence (Sternberg, 1977). Moreover, I then correlated parameters of the models obtained through the method with scores on conventional ability tests. So it was no great innovation, but it was enough to make something of a difference to the field of intelligence.
Another way to innovate methodologically is to alter an existing innovation. For example, Roger Shepard (1962a, 1962b) did not invent multidimensional scaling, but he proposed a new way of doing it that proved to be influential in the field. So changing an existing methodology to make it better or more useful for a particular purpose also can count as a methodological innovation.
Theoretical Innovation
Another route to getting cited is to propose a new theory or a new variant of an existing theory. Pretty much, this has been my main, if minor, claim to fame. Colleagues and I have proposed a number of new theories, a small number of which, such as of intelligence, love, and thinking styles, have caught on (e.g., Sternberg, 1985, 1997a, 1997b, 1997c; Sternberg & Grigorenko, 2001).
Coming up with a new theory may seem like a daunting task, but it does not have to be. First, read up on existing theories of the phenomenon in which you are interested. Do any of the theories seem right to you? If so, then perhaps coming up with a new theory is not in the cards, at least for whatever phenomenon you are studying. But if none of the theories seems quite right to you – as was the case for me when I first started formally studying intelligence – then perhaps you will want to ask yourself what is wrong with the existing theories and what they could do better.
There is no one “right” way to come up with a new theory. Rather, there are lots of ways. I have tended to study things that I do badly and to ask why I have trouble with them. A second way, as noted above, is to ask what about one or more current theories could be better. A third way is to ask yourself what a theory would look like that said more or less the opposite of what one or more current theories of a phenomenon say. A fourth way is to ask how current theories are incomplete: What kinds of phenomena or circumstances might they not cover? A fifth way is to observe a phenomenon in everyday life and see whether the theories seem to explain what is going on in the real world. A sixth way is to think of some kind of model for the phenomenon you wish to understand. For example, Freud's theory was hydraulic in nature, while Herbert Simon viewed the mind as similar in some ways to a digital computer. A seventh way is to imagine starting from scratch: If you knew absolutely nothing about existing theories, how would you account for the phenomenon in which you are interested? And an eighth way is to take an existing theory and then change some key parameter of it, asking how the theory would work if that key aspect were changed.
In my own case, I decided to study intelligence because I did poorly as a young child on intelligence tests. But I then got my ideas for the so-called “triarchic theory” (Sternberg, 1988) by observing three students with whom I was familiar who did not seem to fit well with traditional theories of intelligence. One was highly analytically intelligent but not very creatively intelligent. Another was highly creatively intelligent but not as analytically intelligent. And a third was highly practically intelligent but neither very analytically nor very creatively oriented. And there it was – a triarchic theory incorporating creative, analytical, and practical skills.
Coming up with a new theory is not for the timid. You may think that, if you are just starting out, it is way beyond you to come up with a new theory. But really, it's not. Many of the best theories come out of people's minds when they are just starting out (Gardner, 2011; Simonton, 1994). Moreover, novices sometimes come up with new ideas better than experts do, because they are often more flexible and less entrenched in their thinking (Frensch & Sternberg, 1989). So you should not let your lack of expertise discourage you. In fact, if you lack expertise, you may actually be in the best position to come up with a new theory, as long as you have sufficient knowledge to know what you are talking about!
Practical Significance
Another way to have impact is to have findings that are of practical significance – that can make a difference to people's lives. Tversky and Kahneman's work, cited above, has had tremendous practical significance. People reading the work cannot help but realize how susceptible they, like everyone else, are to heuristics in thinking, such as representativeness, that can lead us to false conclusions. Another researcher whose work has had tremendous practical significance is Erik Erikson, whose psychosocial theory of development (Erikson, 1950) is still taught in child-development and lifespan-development courses, some seventy years after it was first proposed. If one looks for strong empirical verification of Erikson's theory, or of Sigmund Freud's theories, for that matter, one will have to look a long time. There just is not much there. But these theories have gone beyond psychology into the world at large, and they have had such practical impact that, for better or worse, they have transformed the way people think and act.
Another theory that has been transformational is that of Howard Gardner (1983), the so-called theory of multiple intelligences. Gardner suggested that intelligence cannot be fully characterized by just a single number. Instead, he proposed, we need to view intelligences as multiple, including not only linguistic and logical-mathematical intelligences, but also spatial, bodily-kinesthetic, interpersonal, intrapersonal, musical, and naturalist intelligences. Is Gardner's theory correct? The evidence is at best mixed (see Visser, Ashton, & Vernon, 2006). But the theory has given parents and teachers a new way of looking at the children for whom they are responsible, so that they would realize that there is more to a child's intelligence than an IQ. (I've tried, in my own work, to accomplish a similar goal: Sternberg, 1988.)
You might argue that, hey, none of these theories is correct. Freud, Erikson, Gardner, and others all proposed theories that have holes in them. That is true. But it is also true of all scientific theories, at least in psychological science. None of our theories is “final.” Even the best theories are later elaborated upon or replaced by other theories. If your goal is to propose the final theory of anything, you need to find another career. Science is a progressive discipline. The goal cannot be to find a final theory, but rather to improve upon past theories and to hope that one's future students improve upon one's own theories. We do not propose “true” theories, but rather ones that help move psychological science forward to further understandings.
Heuristic Value for Future Research
Last night – literally the night before I write these words – I gave a talk in Seattle and, in the question period, I was asked the question: “How confident are you that your theory is right?” I smiled and answered, “I'm totally confident that the theory is wrong.” I explained that all theories, at least in psychological science, are wrong to some extent. The most-cited authors in the history of the field – Freud, Piaget, Skinner – all made mistakes in their theories. Even Newton's theories were not quite on the mark. Science is not about positing final theories. Rather, each theory serves as a basis for the next theory. We keep trying, for better or worse, to get just a little closer to the truth, whatever that may be. But we never truly reach it, because we are not omniscient.
One way to increase the influence of your work is to write articles that will help future researchers tackle their own research questions. So you might ask whether your article will provide a good basis for next steps in scientific research. Piaget, for example, had many errors in his theory. As it turned out, children could do things at earlier ages than he thought, and some of the failures in children's answers that Piaget attributed to lapses in reasoning were actually due to lapses in memory. But Piaget's research formed the basis of thousands of studies that later would show Piaget to have been right in some respects, wrong in others. No scientist can get everything right. What they can do is raise interesting questions, provide tentative answers, and hope that their work is an impetus to further research.
An oddity in my own career is that there has been only one time that I came anywhere close to answering a scientific question. It was in graduate school, when I was studying a phenomenon called negative transfer in part-whole and whole-part free recall. For the purposes of this chapter, it does not matter much what the phenomenon is. The important thing is that my advisor, Gordon Bower, and I came about as close to solving a problem as I think scientists could (Sternberg & Bower, 1974). At the time, I thought it was great that we had come close to solving a problem. It wasn't. After our article, people pretty much stopped doing research in that field. There was nowhere obvious left to go with it. I was out of a research area and had to find another one. So I learned that the ideal goal in research is not necessarily to solve a problem for all time, but rather to suggest to oneself and others what the next step in research is.
Clear and Compelling Writing
Well, you may not have any great new theoretical ideas or any wonderful new methodologies. Maybe your particular article has no great practical significance and is not going to generate tons of future research. There still is something important you can do to increase the impact of your article. Write in a clear and compelling fashion. So many articles fail to have impact either because people have trouble understanding them, or because they are not written in a convincing way, or because they are not written in a way to capture the interest of anyone who is not a specialist in the field about which the articles are written. Write in a way that tells a story, a story that anyone would want to listen to. Make the story as clear and compelling as possible. And make sure that, up front, you clearly address whatever question you are asking, and at the end, you clearly state what your answer is (at least for the time being). Good writing alone will never produce an article that shakes up the world, but without good writing, no matter how great your ideas, you risk losing your chance to shake up the world because no one understands what you have said, or cares! Write a lucid article. Show insight into whatever problem you are addressing, no matter how small or large (Sternberg & Davidson, 1982, 1983). Show intellectual leadership in your area, no matter how big or small the area is (Sternberg, 2003). And if you do these things, you will go a long way toward assuring the article's success.
You have now read about how to write articles for publication in psychology and you are ready to prepare for the final exam. The exam is most likely not some multiple-choice delight to be administered at a cramped desk in a college classroom. Rather it is the actual task of writing articles for publication. This chapter summarizes 50 of the main tips to be distilled from the chapters of this book.
General Tips
1 Ask yourself whether your ideas are interesting to you, and why they would be interesting to other people.
All of us read articles that leave us gasping for breath: How could anyone find the work interesting other than the author? You are more likely to avoid the embarrassment of proposing boring ideas if you ask yourself why others and not just you should be interested in the ideas you have to offer.
2 Realize that new ideas are often difficult to get accepted.
The more your ideas depart from mainstream ways of thinking, the harder it probably will be to get your ideas accepted. Thus, the more the ideas depart from the mainstream, the more effort you have to devote in your article to convincing people that what you have to say is worth listening to.
3 Write the article that emerges from your research rather than the article you planned to write.
It is rare that the research you do leads you to the exact outcomes you expected. In your article, make clear what you expected to find and why you expected to find it, but also make clear what you actually did find and how it is similar to and different from what you expected.
4 Explore the data to find out what they have to say and not just what you expected them to say.
Data often are perverse: They come out in a way you did not expect or perhaps never even considered. Analyze your data to make clear how they are both similar to and different from what you expected.
5 A good article tells a story.
You may view story writing as different from professional writing in psychology. In fact, in many ways they are the same. A good psychological article has a story to tell, and develops that story from the beginning to the end, or at least the end as the author knows it.
6 Write the story the data tell as well as how they differ from what you expected.
Readers want to know both what you found and what you thought you would find.
7 Write for the student in Psychology 101.
Many writers grossly overestimate the background knowledge of their readers. Write an article that any bright introductory-psychology student could understand. Be accurate, clear, well-organized, and direct. Write linearly. Stick to material that elaborates your main story and avoid subplots. Avoid jargon where possible, but if you need it, be sure to define it. Also remember that a student in Psychology 101 probably will not be interested in your article unless you make a deliberate effort to interest that student!
8 Make clear what is new in your article.
It often is not clear what is new in an article. Make sure you state it directly rather than hoping readers will see it. Also make clear why what is new should matter to the reader.
9 Write with your referees in mind.
Think of people likely to review your article and the kinds of objections they are likely to raise. They represent many other readers who may see things differently from you and who need to be convinced of the validity of what you say. This means that you need to acknowledge not only the strengths, but also the weaknesses of your article. If you don't spot the weaknesses, you can be fairly confident your referees will.
10 Write in the manner of an hourglass, starting broadly, becoming more specific, and then ending broadly.
You should start your article dealing with the broad questions you will address. Then you should get specific in terms of what you did. Finally, you should discuss broadly the implications of your work. Why is the work relevant to the reader, to the field, or to society at large?
11 Make clear how your study tests your hypotheses.
Sometimes a set of hypotheses is presented and research is presented, but it is not at all obvious how the research actually tests the hypotheses. Make clear how it does. A major complaint of reviewers can be that the hypotheses are fine, the study is fine, but the study does not test the particular hypotheses that were posed.
12 Polish and proofread.
Do not expect referees or editors to do your rewriting for you or to tolerate loose, sloppy, or error-laden writing. Polishing and proofreading are your responsibility. As an editor, I have had referees refuse to review articles because they have too many typographical errors or grammatical mistakes. Even if referees do review error-laden articles, their impression may be negative, not because of the content, but because of the sloppy way it is presented.
13 Do not use synonyms, especially for technical terms, just for the sake of avoiding redundancy.
Readers may believe you are varying the words you use because you are referring to different concepts.
14 Make length proportional to contribution.
Journals have limited space. Longer articles therefore consume a valuable resource. Hence you need to be confident that, the longer your article, the greater its contribution.
15 Use a title that clearly expresses what the article is about and that also, if possible, captures attention.
An irrelevant title tricks people into scanning (but rarely reading) something they do not want to read. A boring title may lead them to avoid reading the article altogether. Interest the reader with the title, but make sure the title truly reflects what the article is about.
16 Write an abstract that contains the information a reader most would want to know.
Some people never will see anything more than the abstract. The better the abstract captures the key ideas and findings of your article, the better disseminated your work will be.
17 Accept feedback nondefensively but critically.
Most of the comments you get from referees will help you to produce a better article. Some will not. In revising an article, make the changes that will improve the article. Consider making the changes that, at least, will not hurt the article. But do not make the changes that will hurt it. In your letter to the editor, you can explain why you did not make certain changes. The editor, of course, is free to accept or not accept your explanation, as he or she wishes. Your first reaction to reviews may be to think that the editor and/or referees are fools or simpletons. Consider the possibility that they may simply have seen problems that you failed to see.
18 A good literature review (whether as a general article or as part of an article) defines and clarifies a problem; summarizes previous research in order to inform the reader of present research; identifies relations, contradictions, gaps, and inconsistencies; and suggests next steps.
The literature review thus informs readers of where things have been, where they are, and where they need to go. Do not just present studies. Make clear what they show and why it is relevant to the article you are writing.
19 A good author writes with his or her readers in mind.
Ask yourself how well your readers will be able to understand what you write. For example, readers often get confused by pronouns without clear antecedents and by imprecise language. What is your readers’ background, and how can you write in a way that takes into account both what they are likely already to know and what they are likely not to know?
20 A good article has a take-home message.
Often readers finish an article without any clear idea of what the main point of the article was supposed to be. A good article has a clear take-home message so that the reader can briefly summarize what the article was about. If the reader wanted to summarize your article in a sentence or two, would he or she be able to do so? If not, how can you help the reader to do so?
21 Write for a class of journals.
You should have a journal or class of journals in mind when you write an article. The article then can be targeted to the readership and requirements of that journal or class of journals.
22 Choose carefully the journal to which you submit your article.
You can save yourself a lot of time by choosing a journal that is appropriate in terms of what it publishes and that is likely to accept an article of the quality yours is. As an editor, I reject approximately one-quarter to one-third of the submissions I get because they do not fit the journal I edit. Such submissions waste both my time and the authors’.
23 Do not take reviews personally.
Reviews are of work, not of you. Some reviewers get personal. Ignore such remarks. Read the reviews in the spirit of using them to improve your article. Some scholars have so much self-confidence that they immediately assume that negative reviewers don't know what they are talking about. Maybe they have something to tell you that you need to know before you embarrass yourself publicly. I've often been grateful to reviewers that they caught problems I just never would have noticed.
24 When you resubmit an article, be clear as to how you handled each of the points made in the reviews.
Reviewers and editors get annoyed when they are ignored. You should follow most of their suggestions and indicate how you did so in a resubmission letter. Those suggestions you cannot accept should be highlighted in the letter and you should explain why you did not follow them.
25 Relate what you are writing about to people's everyday experiences.
You capture people's attention and interest when you draw the people in by relating what you are studying to experiences they have faced or expect to face in their lives.
26 Use interesting rhetorical questions.
People often find themselves wanting to answer rhetorical questions, thus drawing themselves into the article they are reading.
27 Say clearly why what you are studying should matter to your readers.
Do not expect readers to see on their own the importance of your work. Make clear why the work should matter to them. As an editor, I advise my authors to write for people in any area of psychology who might, for whatever reason, want to read the article they have written.
28 Review relevant literature in a way that relates it to the argument you want to make.
No one likes to read an unfocused, rambling literature review. Organize your literature review around the ideas that you wish to communicate in your article.
29 Use direct quotes only when necessary.
Use direct quotes only if they are needed to convey the flavor or exact message of an original text. Otherwise, they just clutter up and often obscure your message. If quotes are long, you may need permission to publish them.
30 State your research question(s) clearly.
You need to be very clear just what questions will be addressed in your article. Often you also need to make clear what questions the reader may expect to be addressed that are not, in fact, addressed.
31 Treat differences of opinion with respect.
Treat others the way you would want them to treat you – with respect – even if you disagree with what they say and are convinced that anyone in his or her right mind would see things as you do. Never make ad hominem remarks in an article, that is, remarks that attack a scholar rather than the scholar's work.
32 Keep in perspective the importance of your own work.
Readers tend to be turned off by authors who glorify the importance of their own work beyond reasonable bounds or who fail to make clear the ways in which their own work builds on the work of others. It's better to have other people say how important your work is, rather than you, yourself.
33 Be generous in your citations of others.
No one likes to be ignored, especially referees of articles. It therefore is important to cite relevant past work, especially if someone is likely to be a referee of your article. It further is important to cite work that is not consistent with your point of view in addition to the work that is consistent.
34 Be current in your citations of others.
You do not want to look like you stopped keeping up with the field a decade ago. Make sure your citations are current. It is good to have some classic citations, but if you do not cite recent work, people likely will assume it is because you are unaware of it.
35 Avoid secondary sources.
Extensive use of secondary sources suggests laziness on the part of an author. Cite the primary sources. In this way, you not only show better scholarship skills, but increase greatly the likelihood that what you say people said will correspond to what they actually did say.
36 Actively solicit feedback.
You can avoid a lot of headaches and heartaches if you anticipate the comments referees are likely to make before they get a chance to make them. Ask colleagues to read your work and comment on it before you submit the work to a journal. Most important, ask colleagues who are likely to have at least some disagreements with you. Many people in the age of the internet seek communication and feedback almost exclusively with those who are likely to agree with them.
37 The main elements of design are type of design, how participants were assigned to groups, independent variables, and dependent variables.
Make sure your design section contains the necessary elements.
38 Make clear why the design you chose is appropriate to the problem you have studied.
Do not expect readers to figure out why you designed your study as you did. If there are other obvious possible designs, say why you chose the one you did.
39 Make clear what the strengths and limitations of your design are.
Claim only what you can on the basis of the design you used, and show readers that you know what appropriate claims are.
40 Provide top-down structure.
It often is difficult for readers to follow the line of argument in an article. By providing top-down structure and making transparent how you will organize the article, you facilitate your readers’ understanding of what you have to say. A common mistake is to get lost in details. Make sure that readers always have a sense of the big picture.
41 Let the story of your data guide your writing of results, rather than an arbitrary order based on statistical tests.
Do not write your results section merely to conform to the order of output in a bundle of computer outputs. Write in the order that best conveys the message you wish to convey.
42 Justify your choices of statistical tests.
Do not assume readers will know why you did the tests you did. Explain why you did them.
43 Be thorough in your reporting of results without being overwhelming.
Often referees will ask for just those data analyses you chose to omit, so include the full set of data analyses you need to tell your story completely. But omit analyses that are irrelevant to the story you have to tell.
44 If you cleaned up your data, be clear as to how you did it.
Say how you handled missing data, outliers, or any other peculiarities in the data, such as non-normality.
45 Be sure your conclusions follow from your data.
It is often tempting for an author to go beyond the data in establishing conclusions, saying what he or she wants to conclude rather than what the data allow him or her to conclude. Draw only the proper conclusions. It is fine to speculate, but then properly label speculations as such.
46 The discussion should make clear what you have contributed, how your study helped resolve the original problem, and what conclusions and theoretical implications can be drawn from your study.
A good discussion goes well beyond summarizing the results: It relates your results back to why you originally did the study, and makes clear the meaning of what you found out.
47 The discussion should be viewed as argumentation, not just as explanation.
Good writing in articles is not merely expository, but persuasive. You are trying to convince readers of the validity of your position, and often, of the lack of validity of alternative positions. However, be realistic in terms of what alternative positions you can rule out.
48 Decide what is worth emphasizing in your discussion and what is not.
Good writing is hierarchical: It makes clear which points are important and which are merely supporting.
49 Use the discussion to make clear the limitations of your work.
Reviewers will notice them. You take some of the wind out of their sails when you anticipate what they are likely to say in objection to your work. You may also want to use the discussion to rebut, in advance, obvious criticisms of your work.
50 Never end an article with an expression like “Further research is needed”.
What a bore! Of course further research is needed.
You now are ready to write better articles. The tools are right in this book. You need only use them.
Choosing the right journal to submit your paper to for publication is often challenging, and involves a variety of factors. In this chapter, we discuss factors you need to consider in selecting the best publication outlet for your submission, including the fit of the article to the journal, the reputation of the journal, the journal readership, and turnaround times and rejection rates. There is no one journal choice that is best for all psychology papers, but rather the best choice will often depend on the paper, your current situation, and other issues that are specific to each submission.
Introduction
The choice of a journal outlet is almost as important as the content of your paper and how you write it. Although there are many channels to share your work such as through talks, workshops, website discussion groups, and academic-idea exchange sites (and yes, even tweets), journals remain the coin of the realm for scholarly dissemination. Journals often give the broadest distribution of your ideas, along with greater methodological detail, and have the advantage of guaranteeing access for many decades into the future. Moreover, the process of peer review in journals can lead to work that has greater clarity and rigor. However, there are a large number of journals available and the choice of which journal to send your work to can be challenging. In selecting the right journal for your paper, there are several characteristics we encourage researchers to consider:
fit of the article to the journal
reputation and quality of the journal
academic readership and beyond
turnaround time and rejection rate
Apart from characteristics of the journal, there are also personal factors that you might consider when weighing these different characteristics. These include career stage, evaluation window, and opportunity costs in pursuing other papers or projects. We elaborate on the four journal characteristics for consideration followed by relevant personal factors.
Fit with Journal
The fit of an article with the journal is important, and you should consider whether the findings, perspective, and style of writing square with what the journal typically publishes. There are a few types of journals: review journals (e.g., Psychological Bulletin), opinion and perspectives journals (e.g., Perspectives on Psychological Science), study-reporting journals (e.g., Journal of Personality and Social Psychology; Psychological Science), theory journals (e.g., Psychological Review), and methodology journals (e.g., Psychological Methods). Determining beforehand whether the type of paper you have will fit the broad theme of the journal will increase the chance that the journal will consider your paper. If you do not know much about the journal, you can typically find information online, such as the “Description” and “Aims and Scope” of a journal.
Beyond the type of journal you may also want to think about the (breadth of) topics published in the journal. For example, Psychological Science publishes papers on a number of topics, including those in social psychology and cognitive psychology. However, Journal of Personality and Social Psychology does not usually publish articles in cognitive psychology. You can glean this information from reading the description of the journal and reading the titles and abstracts of recent articles. This is a relatively quick way of finding out if your paper is similar enough to those published to be accepted in the journal of your choice.
Another dimension of fit with a journal that you will want to consider is the style of journal writing and reporting. This dimension includes a variety of aspects, including writing style, length of article, number of studies, number of references, length and detail of methods, and reporting of results. Some journals are willing to consider longer introductions and discussions with multiple studies, whereas others might only consider short articles within a word limit. Even within a journal, there may be options for full-length manuscripts and short-report formats. You should select a journal that fits your style of writing and reporting. More often than not, however, you will need to edit the paper to fit the style and requirements of the journal. With this in mind, you will have to weigh the opportunity costs of submitting to a journal based on the amount of editing and rewriting needed to have it considered. For example, if you have a long manuscript with multiple studies, it may be easier to edit it for a journal like Journal of Personality and Social Psychology than Psychological Science, due to the latter's word-length limits.
Another consideration of fit with a journal is the quality and rigor of the types of papers published. Some journals desire work that is more methodologically rigorous, with watertight inferences and conclusions. Other journals are open to more exploratory papers, or papers proposing novel ideas that may have slightly less rigor. It is helpful to recognize whether the work fits a journal's expectation for quality or rigor. We recommend that you find more senior colleagues to provide feedback on the appropriateness of the work for a journal or other journals that you may not have considered.
The idea of fit evolves over time. We encourage you to keep up to date by regularly reading articles from journals – especially those you are interested in publishing in – to get a sense of the style of papers they publish and the way they are currently written. The types of papers published 20 years ago (which you might have read in a graduate class) are likely quite different from the types of papers published now. Keeping up with journals also provides potential opportunities to receive insights from the editors. During their term, journal editors chart a course for the direction of the journal. Often a new editor will describe the types of articles that are desired by the editorial staff, additional themes that the journal might consider, and changing journal policy toward certain types of papers (e.g., whether to accept replication studies). Finally, you can be on the lookout for calls for proposals or special-issue topics that coincide with what you are working on. These special issues often have a higher acceptance rate than regular submissions. For example, there was a call for proposals on a topic that one of us (Tay) was interested in, and he ended up submitting two different papers that were reviewed by two separate action editors. They independently accepted the papers, which resulted in an interesting special issue with two papers published by the same lead author.
Reputation and Quality of Journal
In general, it is preferable to publish in journals that have a higher reputation. There are several sources often used to determine the reputation and quality of a journal. From a sociology of science perspective, the psychology community has established certain norms and processes to recognize journal quality, and senior members in the field have a good understanding of this information. Seasoned scholars can tell you which are the “A,” “B,” and “C” journals. One journal may be called “cutting-edge,” the next called “solid,” and the third labeled a “back-up.” Some consider the journals below “C” to be “trashy,” but they can serve the purpose of publishing failures to replicate, descriptive statistics on a population, and other data that the authors believe are worth disseminating even if they are not cutting-edge findings. Such outlets are becoming increasingly important for ensuring that novel, cutting-edge findings are replicable and trustworthy.
Often the top psychology journals are associated with highly respected societies, such as the Association for Psychological Science and the American Psychological Association. We encourage you to approach mentors and advisors on this issue, as they are good sources of information about respected journals. You often will want to aim for the ones with stronger reputations, as these journals are commonly perceived to be of higher quality and likely will count more toward getting a job or receiving a promotion. However, at times, you might select a less prestigious journal that is likely to accept your paper simply to get it out and move to new projects, or because you can disseminate the data more quickly this way.
With the advent of bibliometrics, more psychologists are now turning toward objective metrics as a proxy for journal reputation. Among these metrics is the journal impact factor, a frequently used index of reputation that represents the average number of citations per article. In particular, many journal editors monitor the yearly journal impact factor, which is calculated by dividing the number of citations the journal received in a given year to articles published in the previous two years by the number of articles published in those two years. A high yearly impact factor demonstrates that other scholars are paying attention to the articles published there and citing them in their own work.
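The arithmetic behind the two-year impact factor can be sketched in a few lines of Python; the numbers below are invented purely for illustration:

```python
def impact_factor(citations_to_prev_two_years: int, articles_prev_two_years: int) -> float:
    """Two-year impact factor: citations received in a given year to articles
    published in the previous two years, divided by the number of articles
    published in those two years."""
    return citations_to_prev_two_years / articles_prev_two_years

# Hypothetical journal: 480 citations in 2024 to its 2022-2023 articles,
# of which there were 200 in total.
print(impact_factor(480, 200))  # 2.4
```

Note how sensitive this ratio is to outliers: a single article cited 200 times would, by itself, add a full point to this hypothetical journal's impact factor.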
Although the impact factor can correspond to the reputation of a journal, many academics now realize the shortcomings of this metric. For one, the impact factor is often confounded with the type of articles a journal usually publishes. Review-type journals tend to have a higher impact factor than journals that publish single-study papers. Moreover, some methodological journals that have high prestige, like Psychometrika, tend to have low impact factors due to the technical nature of the papers. Perhaps the largest shortcoming of the impact factor is that it is skewed by a few very highly cited articles that are not representative of the frequency with which most articles are cited. There are other limitations to the impact factor as well. For example, new journals might not yet have a metric, scholarly fields differ in size and therefore in citation counts, and editors can “game” the impact factor by encouraging self-citations or by publishing papers online first so that they accumulate citations before they appear in print. Given that the citation count of your article will ultimately be more important than the citation count of the journal where you publish it, it is helpful not to be too focused on journal impact factors. Instead, it may be more helpful to think about journal readership.
Journal Readership and Beyond
One of the goals of academic journal publishing is to disseminate one's research or opinions to enhance the field of psychology. Therefore, it is important to also consider who may have access to, and read, the journal. With regard to readership, you will want to first think about the breadth of the journal readership. More established journals tend to have larger academic readerships and subscriptions from libraries, which provide access for other scholars. While the size of readership matters, it is sometimes better to have a more focused readership that is specifically interested in the topic of your paper. As such, more specialized journals with smaller readerships may actually lead to greater attention, adoption, and citations of your paper because readers care more about the work. Apart from the size and interest of the readership, some journals are also more academic as opposed to practice oriented. If you are interested in making an impact on practice and policy, you may want to publish your work in a place read by people beyond academics. For example, organizational researchers may wish to share the latest findings in Harvard Business Review, which has a strong readership among global business leaders.
Another critical part of readership is the ease of access to the journal article itself. One possible consideration – but a limited consideration in our view – is whether a journal is open access. Open access journals can potentially reach a larger audience because they allow any member of the academic community and general public to easily download the article. However, it is now easy to share research through websites, emails, listservs, social media, and research platforms, which makes publishing in open access journals less critical than it once was. Further, many traditional journals now offer the option to pay an additional fee to make your paper open access.
Another possible consideration is whether a journal has a strong media team to support authors in communicating their research to the news media and general public. Some journals have a very active media team, which increases the reach of your paper. That said, if your paper contributes to the field and is well done, it is likely that it will eventually be shared and disseminated. Thus, the factors above may help to accelerate that process, but they are less of a guarantee than publishing in a top-tier journal.
Turnaround Time and Rejection Rates
With the ever-increasing pace and pressure to publish, a significant factor to consider is journal turnaround time, from first submission to publication. Different journals vary substantially on this factor. Despite the pressure to publish, the slow turnaround from when a paper is submitted to when it finally appears in print can surprise and shock young scholars – the process can take several years! This slowness has become one of the big aggravations in the “publishing game.” Another surprise and aggravation for young professionals is the extremely high rejection rates of many top journals – rates even exceeding 90 percent! When these rejection rates are combined with slow turnaround times, the process can become very frustrating and stressful. Please be forewarned and know what to expect!
Turnaround time.
Turnaround time comes in three forms, and there are a number of ways one can learn about each. The first turnaround time is from submission of the paper to the time when the first decision letter arrives. The second turnaround time depends on the number and scope of revisions that characterize a journal and a particular editor, and whether the revisions will necessarily require more outside reviews. The third type of turnaround has to do with the queue of accepted papers at a journal, or how long it typically takes a paper to appear in print after being accepted. With the online-first publication process, this last turnaround time has become less important because most journals have a mechanism for the electronic publication of articles before they appear in print.
A few journals report their turnaround times, although this has become much rarer. Based on our experiences from being on editorial boards of psychology journals, the average turnaround time from first submission to first decision varies from about two months to over six months. Nowadays, to be competitive, journal editors are working hard to speed up turnaround times, but it remains difficult because it is hard to get an adequate number of timely, high-quality reviews. In the absence of published turnaround times, one can learn from the experiences of colleagues. Authors can also estimate turnaround times from the publication lag published with the paper in many journals – the time from submission to publication. However, this shows results only for accepted articles, which may have longer turnaround times, and it does not take into account articles that were desk rejected.
Examples from the second author's (Diener) submission experiences can give an idea of the range of turnaround times. Across his over 350 publications (and many rejections and resubmissions along the way), the best case was a paper submitted one day, accepted the next day, and appearing in print the next month. This is almost unheard of, so do not expect it!
A couple of worst-case experiences highlight how slow journal turnaround times can be. In one case we submitted a paper to a top journal, and after a year had heard nothing. Upon inquiry, it turned out the journal had failed to ever send the paper out for review! To make matters more galling, the editor blamed us for the oversight. In another case, this same high-prestige journal obtained reviews and we revised the paper according to the editor's instructions. However, the editor then got fired by the scientific organization that owned the journal, and they brought in a new editor. This new editor ignored the old editorial letter and reviews, and started the review process anew! Thus, journal submission can be frustrating. Our hope is that our bad experiences serve as contrast anchors, making your own experiences seem positive by comparison.
In contrast to our best and worst experiences is the average experience. Most editors will get back to authors with reviews and an editorial decision in two to six months. If the paper is not rejected, the decision will almost always be a revise-and-resubmit – an immediate acceptance of a submission “as is” is virtually nonexistent. Frequently, fairly substantial revisions are required, followed by further reviews. In the next round, a set of smaller revisions is often requested, and these frequently undergo editorial review only, without the input of outside reviewers. Be warned, however, that there is a lot of variation around this modal pattern.
Adding to turnaround times is the fact that many of the top journals often require papers to be revised several times. This process of revision and re-review can lengthen time to publication a great deal. Authors can learn from colleagues about the revision histories of particular journals and editors. Some editors take the laissez-faire approach of “it is your paper and it is good enough,” and do not require extensive revisions. Other editors think of themselves almost as co-authors and get involved in every aspect of the paper, down to wording changes. As an example, one of us (Tay) once received a Word document with almost every other line edited by the editor using track changes. Editors differ not only in turnaround time, but also in the work and hassle they create for authors. Thus, it can be helpful to know whom you are dealing with!
Rejection rates.
The rejection rates of journals vary from about 90 percent down to very few rejections, especially at “pay journals.” At the top journals in many fields, even good articles are rejected because they are not seen as important enough, novel enough, or rigorous enough. A journal's rejection rate is particularly important for young scholars who need publications soon to advance their careers. The risk of submitting to a high-rejection-rate journal is that many months might pass before one learns of a rejection. On the other hand, publishing in such journals can look impressive on a scholar's vita. The rejection rates of many journals are public, so it is easy to consider this factor before submitting.
Desk rejections.
A feature of many top journals now is that the editor will quickly reject a significant number of articles without sending them for review. If the editor judges that a paper is very unlikely to be accepted at that journal, then he or she will reject it outright to save reviewer time and shorten turnaround times for authors. Although this outcome can be discouraging, the bright side is that the author is likely to receive such a rejection quickly – typically in less than two weeks – and therefore not have to wait months for reviews to come in. Quick desk rejections are actually a good feature of a journal. Authors need to understand that a desk rejection does not mean the article is bad; it just means that the editor thinks it does not meet the very high threshold of importance or relevance for that journal. In our experience, a desk rejection can be contested and formal reviews requested, but the arguments must be good for this to happen.
Reviews from other journals.
At some journals, the editor will be willing to accept the reviews from a journal that has previously rejected the paper. The idea here is that the paper might be quite good, but nevertheless was rejected at an outlet with a very high rejection rate. Rather than consume more of the scarce commodity of reviewer time, an editor may take a look at the former reviews and not seek new ones. Occasionally the editor might examine the old reviews and yet require one or two more. In some cases, however, editors might be willing to ask for revisions without new reviews, based solely on the old ones. It can happen that these old reviews were not that negative, but simply not enthusiastically positive enough for the first journal. Thus, journals that are willing to take a look at previous reviews of a formerly rejected paper can sometimes speed up the turnaround time considerably. Information about the use of old reviews is usually contained in the editorial statement of a journal, although editors always reserve the right to obtain additional reviews after examining the old ones. Given this, you may wish to include information about previous reviews in your cover letter to possibly speed up the review process, although this is not the norm.
Requesting reviewers.
In the submission letter you can request that the editor use specific reviewers who have expertise relevant to your paper. Many editors will use a name or two from this list, although they often also seek additional reviewers. The advantage here is that you can suggest reviewers who you believe will not be biased against your work, and might even like it. You can also give the editor the names of people who you believe would not be good reviewers, whether because of personal bias or past conflicts. It is even possible to explain why you believe a reviewer would not give a valid review. Occasionally an editor will seek the opinion of this reviewer anyway to see what he or she has to say. At other times, especially if personal conflicts are involved, an editor may heed your warning and avoid the named reviewer. Suggesting names of reviewers has little downside, although it is not guaranteed that your request will be granted.
Personal Factors
While most scholars consider the aforementioned factors, they often weigh them differently. Understanding these factors can help you determine which aspects matter more to you. It can also help you understand discrepancies between your goals and those of your co-authors. For example, in collaborative papers between junior and senior authors, the junior author often emphasizes the need to publish in a high-prestige journal, whereas the senior author emphasizes speed and fewer hassles. Clearly, career stage is important. For psychologists who are established and well-recognized in the field, the speed of dissemination is more critical than the prestige of the outlet, because they do not require prestigious publications to advance their scholarly careers. However, for a young scholar seeking her or his first job or promotion to tenure, the reputation of the journal should be weighed more heavily.
Apart from career stage, it is also critical to consider when your curriculum vita will be formally evaluated. It may be necessary to get papers out quickly if you are going on the job market soon, or are being evaluated for promotion and/or tenure. You should think about the likely turnaround time for a journal and when you will need to have your materials together for evaluation. Usually, having a paper under review does not count for much. It is certainly better to have a revision at a journal, or better still, an acceptance. With a shorter time frame, one should favor journals that have a faster turnaround time. Preferably, you should work backward from the evaluation period to give yourself plenty of time to publish your papers. We also recognize, however, that data collection and analyses do not always pan out; in that scenario, one needs to be tactical and send the paper to an outlet with a solid reputation and a decent turnaround time.
Trying to get your papers published is a draining process. It is not unlikely – and perhaps very likely if you are in academia long enough – that you have some papers that never seem to get through the publication process. In these instances, you may want to target journals that have a higher acceptance rate to get the paper out to the world rather than burying it in the so-called “file drawer” (or your computer file folder). It can also be motivating to move the paper off your personal revision pile in order to receive a morale boost, and more importantly, to reduce the opportunity cost of not working on new and more promising projects!
There Are Many Journals: Don't Fret
Young scholars can get caught in the trap of thinking that they must get their paper accepted by a specific top journal. In fact, in our modern age of technology, your article will be seen and cited in most journals, and the quality of the article will ultimately win out. It is of course helpful for young scholars to publish in the most prestigious journals, but many of their articles can go to more focused journals and their careers can still thrive. The second author (Diener) of this chapter had one of his early papers rejected by a top journal and then accepted by a “second-tier” journal, and that publication has now been cited over 20,000 times!
In the long run your career will flourish based on the entire corpus of your contributions, and will almost never depend on being published in a specific journal. So relax, submit, and please enjoy the research itself, which is why you entered the field in the first place! Don't let rejections bother you for long, and see publishing as a challenge with which you can cope, given effort and perseverance! Keep in mind that even the most famous psychologists have received many journal rejections, and the rejections are not a reflection of your ability or worth.
Throughout our academic careers, we have sought to publish in decent peer reviewed journals. Although the first half of our careers took place during an era prior to the existence of Journal Impact Factors (JIFs), there was a laddered prestige ranking of journals that most psychologists implicitly endorsed, from the most prestigious to the least. When we did our doctoral training in psychology, the most prestigious journals were the general science outlets (Science, Nature, Proceedings of the National Academy of Sciences, Scientific American); very few psychologists published articles in these outlets, but when they did, they tended to be landmark studies. A case in point was Gibson and Walk's (1960) groundbreaking research on visual cliff perception, which was published in Scientific American.
Within psychology specifically, perhaps the three most prestigious journals were all general in nature (i.e., they published articles across subfields); these journals were the Psychological Review, Psychological Bulletin, and Annual Review of Psychology. Whenever there was a canvass of academic psychologists, these were the journals that were generally agreed to be on the top rung of the prestige ladder – the ones scientists looked to for important breakthroughs and where they desired to publish their own work when it was appropriate. These top general journals were followed in prestige by the top specialty journals in each area, for example, Journal of Personality and Social Psychology, Journal of Experimental Psychology, Child Development, and Journal of Comparative and Physiological Psychology.
The top-rung journals today differ from those of our student days: new journals such as Trends in Cognitive Science would now be on the list, several top journals no longer exist (such as the Journal of Comparative and Physiological Psychology), and others have been split into multiple specialty journals (such as the Journal of Experimental Psychology: General and the Journal of Experimental Psychology: Learning, Memory, and Cognition). Nevertheless, there is still an accepted prestige ladder. Within each subfield of our overall field, scientists generally agree on where they would prefer to publish their best papers and where they expect to find important findings. At the bottom of the prestige ladder are so-called “vanity presses” – journals that, despite claiming to be rigorously peer reviewed, seem to publish almost anything submitted if the authors pay typesetting/proof charges, sometimes referred to as production costs or open access fees. That said, not all open access journals are vanity presses. You have to learn from your colleagues in the field which journals are prestigious and which are profit machines for the publishers.
Bohannon (Reference Bohannon2013) perpetrated a sting operation on 394 fee-charging open access journals. He used Python to create “Mad Libs”-style fake papers laden with serious problems that should have been readily detected by any legitimate peer review process. Yet when he submitted these fakes to fee-charging journals, the problems were rarely detected, and 80 percent of the papers were actually accepted for publication! As noted above, we do not mean to imply that all open access, fee-charging journals are vanity presses; some are well-regarded journals, such as PLoS One and Frontiers in Psychology, which charge authors (or their funders) fees so that readers can have access without subscribing. Other highly regarded journals charge optional fees only for authors who want their paper accessible to anyone in the world regardless of ability to pay for access (the prestigious Proceedings of the National Academy of Sciences is one of many that do this). Thus, there are some reputable journals in this category of fee-charging open access that have rigorous peer review practices and respectable JIFs (more about this below). However, as Bohannon's sting showed, many journals in this category are vanity presses in the guise of science.
Many scientists, including some eminent ones, have little understanding of what is going on in the world of journal publishing. In 2015, when we invited a colleague from the field of economics to co-publish with us in the reputable fee-charging open access journal, Frontiers in Psychology, she balked at doing so. She said that in her field of economics a publication in a fee-charging journal was worse than having no publication at all! By the end of our joint project, she came to realize that not all fee-charging open access journals are vanity presses, and some, like Frontiers in Psychology, are rigorous in their review process.
However, authors’ perceptions about journal quality and journal impact matter. (A journal impact factor is a measure of the average number of citations to articles in the journal over the preceding two years; thus, JIFs show how many people are reading a given journal and then citing it – see further discussion below.) We both serve on the editorial board of the journal Psychological Science in the Public Interest (PSPI). As is true of all APS journals, PSPI has a very high JIF, even though this information is difficult to locate, for reasons we detail later. But PSPI works differently from other journals: it does not accept unsolicited manuscripts. Instead, after often-lengthy discussion among its editorial board members, the journal editor invites teams of diverse scholars to produce a consensus report on some issue of great public interest. In a recent discussion, editorial board members generated names of prospective authors to be invited to join a team researching the current opioid epidemic. One board member's comment regarding a possible author was telling: “The journals he publishes in seem decent, but I cannot vouch for the work.” This comment reflects a common assumption by scholars in psychological science; namely, that journal quality is a proxy for the quality of papers published in a given journal. And this is especially true if you are an outsider to the research; after all, what other metric is available to help you decide whether to invite someone to work on a white paper, or participate in a meeting, or submit a review for a granting agency or journal? Most of us on the PSPI editorial board have no expertise in the majority of topics that are published in it – such as the opioid epidemic. So we rely on proxies for the quality of potential authors’ research, such as the JIFs of their publications.
When we saw that the person who was nominated published in Science and Nature, we were satisfied that the work met the highest scientific standards.
There are a surprisingly large number of vanity presses, and often they have titles that sound as if they publish rigorous science. These journals have repeatedly become objects of humor after being tricked into publishing nonsensical manuscripts (for a fee). If you have somehow missed these hoaxes, such as the Bohannon sting, check out “The conceptual penis as a social construct,” a completely nonsensical article in which the authors claimed to show “Our planet is rapidly approaching the much-warned-about 2°C climate change threshold, and due to patriarchal power dynamics that maintain present capitalist structures, especially with regard to … the connection between hypermasculine dominance of scientific … and the irreparable damage to our ecosystem.” This paper was computer-generated nonsense, yet it was accepted and published by a journal with a legitimate-sounding name, Cogent Social Sciences. Other equally absurd hoaxes have been accepted and published by other vanity presses that claim to be rigorously peer reviewed.
What does all this mean for budding academic psychologists who seek to build a solid, respectable vita? Junior psychologists should take soundings from experienced researchers to avoid sending their work to poorly regarded outlets, which can be a temptation (for any of us) after the work has been rejected by one or two legitimate journals. We have often wondered whether it is actually better to have no publication on a CV than to have one in a vanity press, as our economist colleague believed. When we have served on search committees to hire professors, we have seen some applicants dismissed because their CVs listed vanity publications. Yet we are aware of the myriad problems in evaluating journal quality. Even the best journals sometimes publish junk science, and retractions.com is littered with the carcasses of papers published in top journals that could not be replicated or, worse, were shown to be fraudulent. And even among the best psychology journals, it has been known for decades that if one resubmits a manuscript to the very journal that recently accepted and published it, there is an excellent chance that (i) the editor and reviewers will not detect it as a paper recently published in their journal, and (ii) after review, it will be rejected because of serious methodological/statistical problems (Ceci & Peters, Reference Ceci and Peters1982; Peters & Ceci, Reference Peters and Ceci1980, Reference Peters and Ceci1982). So journal metrics based on subjective indices such as perceived prestige or on quantitative measures of citation frequency are no guarantee of a paper's quality.
However, faculty serving on hiring and promotion committees need some way of judging an applicant's merit, especially given that few, if any, of these faculty are experts in the applicant's area – or else why would the department be interested in hiring in this area in the first place? Only the largest departments can afford to hire researchers in areas that are already covered by current faculty. So, while few of our colleagues believe that journal metrics such as JIFs are perfect proxies for the quality of someone's oeuvre, some knowledge is still superior to none, and generally someone who publishes in the most esteemed journals – those with the highest rate of rejection or the highest JIF – is likely to be a better bet than someone who does not. Not everyone agrees with this assessment, which we discuss later under the rubric of undiscovered geniuses.
One evening a chance meeting with a colleague persuaded us to shift our focus from publishing in mid-tier journals to upper-tier ones. Over dinner this colleague said something that proved to be a wake-up call of sorts. We had congratulated him on an article he had just published in Science. A year earlier he also had published an article in Science and a couple years before that he had published an article in Nature. Each of these three articles generated media publicity and created a buzz around this person, resulting in several high-profile attempts by top universities to recruit him (including Oxford University). He explained to us that although he did not publish much, what he did publish was often in the most prestigious general science journals (Science, Nature, Proceedings of the National Academy of Sciences) in which he had a half dozen articles, or in the most prestigious disciplinary journals, in which he had about a dozen articles. He had other publications as well, but not as many as you might guess from the aura surrounding him. That night at dinner he told us that he made a deliberate shift in the way he conducted research and published it, aiming for fewer publications that had higher impact rather than many lower-impact articles. He felt that the occasional article in Science, PNAS, or Nature brought his program of research greater visibility (and funding) than was true of his older style of cranking out large numbers of papers and machine-gunning mostly mediocre journals with them. He felt that making a few important discoveries was superior to generating lots of routine publications that few people read and even fewer cite. He suggested that we consider shifting our own research style from high productivity to high impact.
We must have been prepared to hear his advice because we soon found ourselves asking questions that addressed larger issues, the kind that might be relevant for the most prestigious journals. Although we have not been as successful as our colleague in publishing in the most prestigious journals, we have had reasonable success. And it has brought us a level of visibility that we seldom got when we published in average journals.
We write lots of papers, between 10 and 20 per year, 5 to 10 of which are journal submissions (the rest are chapters in edited volumes, op-eds, and reviews/essays). Most years we strive to submit at least one of our manuscripts to a prestigious outlet. It is these publications that have been the most influential in our careers. For SJC, his 12 most-cited articles (i.e., the top 3 percent of his publications) account for half of his citations. Was SJC wasting his time publishing the other 97 percent of his articles, given that between them they generated only the other half of his total citations? We don't think so. As others have shown (e.g., Simonton, Reference Simonton1997; Sandstrom & van den Besselaar, Reference Sandstrom and van den Besselaar2016), researchers have both a creative potential (some have more of it than others) and a personal “hit rate” that reflects the ratio of outstanding findings to mundane ones; the latter papers are sometimes referred to as “normal science,” as opposed to path-breaking work or breakthroughs (Kuhn, Reference Kuhn1970). Some researchers are very lucky and have high hit rates, managing to hit a home run every two or three studies; the example of Gregor Mendel comes to mind, as he published only a few papers in his lifetime, but these included his path-breaking findings. (We will use “home runs” to refer to papers that get accepted in the Cinderella journals, generate high citations, result in enviable awards and invitations, and/or lead to changes in science or national policy.) Other scholars have hit rates that are not this high; perhaps they hit a home run every 15–20 studies.
Our claim is this: If someone conducts fewer studies, he or she will hit fewer home runs, not more. If researchers have, say, a 1-in-20 hit rate and conduct 40 studies over their entire career, then they can expect to hit two home runs. If, on the other hand, they conduct 400 studies, they can expect to hit 20 home runs. We are over-simplifying, of course, not taking into account many variables and assuming linearities where there may be none. And as everyone knows, when you set out aiming to hit a home-run paper, the study is designed differently than when you are aiming for normal science; the former is likely to address larger, perhaps more contentious, issues. In contrast, most normal (non-home-run) studies are designed to illuminate the interior of an issue and would usually not be attractive to the most prestigious general science journals.
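The arithmetic behind this claim can be sketched in a few lines. This is only an illustration of the over-simplified model just described – each study is assumed to "hit" independently with a fixed probability, and the 1-in-20 rate and study counts are the hypothetical numbers from the text, not empirical data:

```python
# A minimal sketch of the hit-rate arithmetic, assuming each study
# independently becomes a home run with a fixed probability
# (an over-simplification, as noted in the text).
def expected_home_runs(hit_rate, num_studies):
    """Expected number of home-run papers over a career."""
    return hit_rate * num_studies

print(expected_home_runs(1 / 20, 40))   # career of 40 studies  -> 2.0
print(expected_home_runs(1 / 20, 400))  # career of 400 studies -> 20.0
```

The point of the sketch is simply that under any fixed hit rate, expected home runs scale with the number of attempts.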
Speaking personally, we are sure that if we slowed down our overall productivity, we would not hit more home runs than we do when our productivity is high; the extra time gained by dropping lower-impact studies would not make our remaining studies better, contrary to what some may think. This is because when we believe the study we are currently immersed in has the potential to be a home run, we are often wrong. We simply have no guaranteed way of knowing a priori which study will end up being a home run. But the more studies we conduct, the more end up being appropriate for submission to a prestigious journal. Spending more time on a study destined to be a “single” will rarely suffice to convert it into a home run. (See our point below regarding multitasking.) Home runs emerge out of the morass of attempts, just as Babe Ruth's record-setting home-run feat came amidst his record-setting strikeouts. Fewer at-bats would not have increased his home runs.
There is a belief among some academics that less is more, and this belief extends to evaluating positively those academics who publish very little. Some prefer to think of these researchers as undiscovered geniuses, surfing above the mundane ideas and approaches that limit the rest of us. At its most extreme, the belief extends to the sentiment that undiscovered geniuses don't need to publish in high-impact journals; they don't even need to publish in any journals. Of course, everyone works at a different pace and has their own research style, and we are not suggesting that one size fits all when it comes to journal publishing. But we have seen many colleagues over our long careers who essentially portray themselves as undiscovered geniuses. They publish very little over the course of their careers (particularly, and conveniently, after getting tenure), but they claim that what they do publish is as good as hitting a home run, even though it is not published in a prestigious outlet – historical consensus, they argue, simply takes time to acknowledge a home run.
Only time will tell if they are correct, but we suspect they are not. They will invoke cases like Gregor Mendel, whose few studies’ brilliance went unrecognized for 40 years, until after his death. Or they may point to scholars like Harry Kroto, Nobel laureate in chemistry. He is ranked an undistinguished 264th in the h-index list of chemists because his fame is largely the result of a single breakthrough paper in 1985. Conversely, they may point to highly prolific scholars like John Gray, a contemporary of Charles Darwin, who published nearly 900 works, none of which was influential.
However, the Mendels and Krotos are aberrations, not the rule. In reality, a meager list of publications mainly yields a meager list of singles, with few if any home runs. We are rarely persuaded by the belief of so-called undiscovered geniuses that they will be posthumously discovered. The reality is that what Sandstrom and van den Besselaar (Reference Sandstrom and van den Besselaar2016) term the “small is beautiful” argument – that publishing fewer papers leads to more breakthroughs – is not empirically supported. In fact, the majority of high-impact studies are produced by productive researchers, and this relationship is found across most fields of scholarship (see Simonton's Reference Simonton1997 examples; see also Sandstrom & van den Besselaar, Reference Sandstrom and van den Besselaar2016 for quantitative evidence). In the physical sciences, for example, quantity and quality are highly correlated, with r = .72 between the number of cites to physicists’ three most-cited articles and the total number of articles they published in their lifetime, a finding documented by many researchers since the seminal analysis by Cole and Cole (Reference Cole and Cole1973) over 40 years ago. And this correlation between productivity and perceived importance holds up regardless of whether the dependent variable is total citations to all articles or the number of citations to the top three articles.
There is another reason we believe in the hit-rate argument. Most of us can multitask on several projects at the same time. Working on one does not interfere with working on the others, because there is a limit to how long we can concentrate on one project before we lose perspective and need a break from it. When this happens, we switch to a different project, one that has not fatigued us. So, given the choice between doing a single study and doing multiple studies, the single study brings no greater likelihood of hitting a home run, because we will need a break from it every so often anyway. Shifting between multiple studies leads to more home runs as well as more singles.
Another consideration scientists must evaluate concerns the importance of a journal's impact. As we noted above, when we began our careers there were no metrics such as journal impact factors. JIFs reflect the average number of citations to articles published in a journal during the previous two years. For example, if there were 1,000 cites in 2018 to the 100 articles that appeared in that same journal during the two prior years, it would have a JIF of 10. This means that the typical article in this journal would have been cited 10 times. However, as we explain below, the typical article is actually cited far less than the JIF average because the latter is inflated by a handful of highly cited articles.
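The JIF computation above is just total citations divided by article count, and the point about the “typical” article can be made concrete with a toy example. The citation counts below are invented solely to show how one heavily cited article inflates the mean far above the median:

```python
# Toy JIF calculation: ten articles from the two prior years and their
# (invented) citation counts in the JIF year. One breakout article
# dominates the total.
import statistics

citations = [70, 12, 5, 4, 3, 2, 2, 1, 1, 0]

jif = sum(citations) / len(citations)   # mean citations per article
typical = statistics.median(citations)  # the "typical" article

print(jif)      # 10.0 -- inflated by the single highly cited article
print(typical)  # 2.5  -- far below the JIF average
```

Here the journal's JIF of 10 describes almost none of its articles: most were cited only a handful of times.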
The concept of the JIF was developed in the early 1970s as a way of helping libraries decide which journals to subscribe to in times of limited resources. JIFs came to be viewed as a proxy for the importance of a journal – those with higher JIFs are seen as more important than journals with lower JIFs. Yet the evidence suggests this is an unwarranted generalization. The high-JIF journals are high for the reason elaborated below – namely, only a handful of articles in them are highly cited, but this handful boosts the overall journal mean, giving the other articles in the same journal a free ride.
Over time, JIFs have become proxies for judging individual scientists and their fields (e.g., publishing in a journal with a high JIF is taken by some to indicate a better scientist, a more rigorous field, or at least a better article than one published in a low JIF journal). None of these conclusions holds up to empirical scrutiny. Sønderstrup-Andersen and Sønderstrup-Andersen (Reference Sønderstrup-Andersen and Sønderstrup-Andersen2008) showed that JIFs were correlated with scientists’ subjective rankings of journal quality: They found that 80 percent of Danish medical researchers reported that JIFs influence the journals in which they prefer to publish, and there was a significant correlation between how these researchers ranked journals and the journals’ JIF scores. A quick Google search will reveal that a lot has been written about the risks of using JIFs as a proxy for either the quality of the articles in a given journal or the quality of the authors who publish in it. For example, Brembs, Button, and Munafò (2013), among others, have shown that, for the journals they examined, the relationship between JIF and citations is quite weak. In their words:
Overall, consistent with the citation data already available, the coefficient of determination between journal rank and citations was always in the range of ∼0.1 to 0.3 (i.e., quite low) … journal rank as established by JIF violates even the most basic scientific standards, but predicts subjective judgments of journal quality.
Several years ago, one of the contributors to this volume (Roediger) wrote an interesting essay in the APS Observer in which he described his own citation metrics and showed that some of his most impactful studies were not published in the highest JIF journals. For example, Roediger (Reference Roediger2013) reported that one of his highly cited papers was published in the Journal of Experimental Psychology: Learning, Memory & Cognition in 1995. At that time, this journal was ranked 18th among empirical journals in psychology, JIF = 2.92. Eighteenth is not awful, but it is not close to the top psychology journals, which at that time had JIFs in the 15–16 range (and today the top psychology journals have even higher JIFs). Roediger's 1995 article in this journal was cited about 120 times per year during the 17 years following its publication, a whopping 2,134 times. However, he also published another article in this same journal that was cited only 1.3 times per year! In Roediger's words: “The point is that it is the paper, not the journal, which counts for evaluating individual papers.” We would add to this sage advice that, if anything, articles published in low JIF journals should be given even higher evaluations, since they did not have the advantage of being published in a prestigious outlet where most scholars first look for important contributions. Such articles garnered their accolades despite sailing against a headwind. Ultimately, what matters is how many colleagues are influenced by a piece of work, which is reflected in the number of cites to it. If a study published in a low JIF journal can garner huge numbers of cites, the journal's low JIF should not diminish the article's importance. If anything, it should enhance it.
The problem of using JIFs for hiring, funding, and program-evaluation decisions became sufficiently acute that in 2012 many journal editors and scientific societies endorsed the Declaration on Research Assessment (DORA), which described myriad misuses of JIFs. Soon the top editors and the societies that published journals became signatories to DORA, including all of APS's journals as well as many of the most prestigious scientific journals. A big danger in the use of JIFs by grant reviewers, award committees, and hiring committees is that they are misleading when it comes to evaluating individual articles and individual scientists, for a variety of reasons. One of these is the fraction of a journal's papers that are review articles: because review articles are very highly cited, they inflate the journal's JIF. An empirical article published in a journal that also publishes reviews thus gets a benefit that is undeserved, because that journal's JIF is carried by its review articles, not by its typical empirical articles. Only a fraction of the articles in a high JIF journal have above-average cites.
As Curry (Reference Curry2012) found:
Typically only 15 percent of the papers in a journal account for half the total citations. Therefore, only this minority of the articles has more than the average number of citations denoted by the JIF. Take a moment to think about what that means: the vast majority of the journal's papers – fully 85% – have fewer citations than the average. The impact factor is a statistically indefensible indicator of journal performance; it flatters to deceive, distributing credit that has been earned by only a small fraction of its published papers … We spend our lives fretting about how high an impact factor we can attach to our published research because it has become such an important determinant in the award of the grants and promotions needed to advance a career. We submit to time-wasting and demoralising rounds of manuscript rejection, retarding the progress of science in the chase for a false measure of prestige.
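Curry's point – that a small minority of papers supplies half of a journal's citations – is easy to check on any skewed distribution. A toy sketch with invented citation counts:

```python
# Invented citation counts for 100 papers in a hypothetical journal,
# sorted from most to least cited.
citations = sorted([60] * 5 + [40] * 10 + [20] * 15 + [5] * 30 + [1] * 40,
                   reverse=True)
total = sum(citations)

# Walk down from the most-cited paper until the running total
# reaches half of all the journal's citations.
running = 0
for rank, c in enumerate(citations, start=1):
    running += c
    if running >= total / 2:
        break

print(f"Top {rank} of {len(citations)} papers ({100 * rank / len(citations):.0f}%) "
      f"account for half of all {total} citations")
```

With these invented numbers, the top 13 percent of papers carry half of the citations – close to the 15 percent Curry describes – while the mean (here 11.9 cites per paper) again overstates what most papers achieve.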
Review articles get cited much more frequently than empirical articles, but this is not necessarily an indication of their greater importance. Some empirical journals limit the amount of literature reviewing and insist on brief introductions and detailed methods. Such policies tilt review metrics upward because, when authors are not permitted to precede an empirical study with a comprehensive literature review, they will “outsource” their literature review by citing a review article (e.g., “see extensive review by …”). This is one reason some reviews are so highly cited. Of course, many review articles are highly cited because they arrive at breakthroughs. But none of this can be foretold simply by looking at JIFs.
Are there alternatives to JIFs? There has recently been a campaign to replace JIFs with measures of more immediate impact (Brembs et al., Reference Brembs, Button and Munafò2013), given that it takes years to accumulate cites in journals. The most widely used new measures are media buzz and network indices such as Altmetric and Mendeley. The former is based on algorithms that detect and differentially weight the number of times a study is mentioned in various types of media (newspapers, blogs, tweets, Facebook, etc.). Such measures are too new to have provoked consensus, either pro or con. But a recent experience may be interesting to graduate students and new Ph.D.s who might be considering boasting about their paper's high Altmetric score. A graduate student we know is a master of this new technology, and he offered to mentor us on its use and show us how we could “game it” to increase the Altmetric score on one of our own publications. In a conversation with two of our most senior colleagues at Cornell, we mentioned this student's advice to us and the high Altmetric scores he had achieved. Interestingly, both of these senior colleagues reacted negatively to this information. They seemed to regard attempts to ratchet up media publicity as something scientists should not undertake. One of them stated he would probably react negatively to an applicant who listed Altmetric scores next to the publications on his CV (Altmetric will send registered users a badge indicating the score, which appears at the top of its PDF), which increasing numbers of applicants do. Our own college both requires that in our annual reports we list information about JIFs for each publication and also allows (but does not yet require) Altmetric scores for each article.
Academic journal publishing can be a confusing game, and making matters even more daunting is the pace at which things are changing in the publishing domain. We have discussed some considerations that hopefully will prove helpful to young academics as they begin navigating these issues on their own and building the publication record they seek.
As with many transactions, the internet has transformed the journal submission process and has significantly impacted how peer review gets accomplished. Over the past 20 years, web-based software has enabled fast digital submission to replace postal mailings and has enabled editors to expedite – at least somewhat – the acquisition of expert reviews. Practically all journals now use web-based submission and review, supplemented with email communication.
Online submission and review provides the potential for a faster overall process, but the human portions of the process still require time. These include not just the various portions of the editorial peer review process but also the author's submission process. Online submission entails more than just uploading a manuscript; it includes providing metadata about the corresponding author, co-authors, the research, and the manuscript. Although streamlining the submission process is a topic of frequent discussion (e.g., Hartley & Cabanac, Reference Hartley and Cabanac2017), be prepared to dedicate some time to online submission.
One useful aspect of most submission systems is that you can practice submitting in advance, in order to discover what information you will be asked to provide. So if you prefer, you can go through the various submission steps, providing whatever answers you like, and then delete your practice submission at or near the end.
Another useful aspect of most systems is that they will save incomplete submissions, at least for a period of time. This allows you to log out partway through the submission process and then log back in later in order to complete your submission.
Not all submission systems are mobile friendly yet. Some sites are still best accessed via desktop computer.
Before Submission
Before you begin the process, read the submission guidelines of your chosen journal thoroughly to be aware of all the requirements. The submission guidelines will not detail every question you will be asked during submission, but they should alert you to all the critical ones.
Note the journal's publishing policies. For example, many journals disallow duplicate submission. Also, not every journal will consider manuscripts that have been posted as preprints; if you have posted your article as a preprint or want to be able to do so, be aware of your chosen journal's policy on this issue, and if it's unclear, be sure to inquire with the journal.
Be aware of the journal's terms for publishing. The traditional model whereby the author grants copyright to the journal is not as prevalent anymore; nowadays journals more commonly ask authors to grant simply the right to publish, allowing the author to retain copyright. Journals frequently request the exclusive right to publish since they don't want someone else to publish the same article.
It's important to note the rights that journals allow authors to keep. For example, journals frequently allow authors to post certain versions of their manuscript in other places, including author websites, employer websites, and/or third-party repositories. Also, journals often allow authors to reuse their article for educational purposes (i.e., in classes they teach) or in books that they author or edit. Be aware of all the terms of publishing and ensure that your co-authors approve of them. If any terms are unclear, ask the journal for clarification.
Pay attention to article formatting requirements and to length restrictions, including on the numbers of figures and tables. Such restrictions exist for a variety of reasons, and if you have the need to exceed them, inquire with the journal editor in advance of your submission.
Many journals only consider submissions in specific file formats such as Word documents or PDFs, while researchers may sometimes prefer writing manuscripts in specialized software such as LaTeX or R Markdown. Be aware of format requirements in advance; they are often in place to ensure the accurate portrayal of your manuscript, and if you need to use a specialized format, it's better to discuss that in advance, too.
Mathematical notation is one example of an aspect to investigate before submission. Journals may have preferences or requirements about how authors generate mathematical notation, in order to ensure the math gets conveyed accurately to reviewers.
Note whether your chosen journal may also offer innovative review options, such as registered reports or results-free review. Registered reports (Center for Open Science, n.d.) are research articles that get peer reviewed before the experiment is performed: The study is conceived and designed, and then submitted as a registered report to a journal that conducts peer review on the study design; if the journal accepts the design, then you, the researcher, conduct the experiment and draft the final report, which the journal publishes on the basis of the earlier acceptance.
Some journals may offer results-free review, which doesn't occur before the experiment is completed but does blind the journal editor and reviewers to the results. As a result, the decision whether to publish is made based only on the study design. If you are interested in a registered report or in results-free review, ensure that you choose a journal that clearly outlines how to approach each of these options.
Submitting Your Manuscript
Depending on the journal and its requirements, manuscript submission can take significant time. Be prepared to provide a large amount of information about the corresponding author, the co-authors, the research, and the manuscript.
Online submission systems typically require any submitting author to have an account, but if you have reviewed for the journal, you probably already have an account. Unless you want to end up with multiple accounts and passwords, try to use a single account for each journal. If you have never submitted to or reviewed for your chosen journal, you will almost certainly need to create an account in order to submit your manuscript.
Author Information
Required personal information typically includes name, email address, and institution, but may also include geographic location, mailing address, and other information. If you are unsure why you are being asked for particular personal information, inquire with the journal to find out why. You will need to know all your co-authors’ information, too.
An Open Researcher and Contributor ID, or ORCID, is a recent innovation that assigns a unique alphanumeric identifier to any researcher who registers for one. It is a free service, available at www.orcid.org, and its use prevents confusion of your name with similar or identical names of other researchers. Many journals now allow submitting authors to specify their ORCID, and some journals actually require it. Not only can ORCIDs be used in peer review, but when a manuscript is accepted, they typically propagate through to the published article, as well as to external abstracting and indexing services such as the National Library of Medicine's PubMed index.
It is important to convey authorship of the manuscript accurately. Be aware of the journal's definition of authorship and requirements for how to portray different aspects of authorship. Also be aware of the journal's requirements or preferences for crediting nonauthor collaborators. It is highly beneficial to have agreement with all your co-authors and other collaborators about your manuscript's authorship before submission.
An emerging standard for specifying researcher credit is the Contributor Roles Taxonomy, or CRediT (Brand et al. Reference Brand, Allen, Altman, Hlava and Scott2015). A few journals and at least one submission system software provider have implemented this taxonomy, and others will likely follow. The roles include conceptualization, data curation, formal analysis, funding acquisition, investigation, methodology, project administration, resources, software, supervision, validation, visualization, writing the original draft, and reviewing and editing the manuscript. Depending on the journal, not all these roles will necessarily qualify as authorship, but many of them will. Pay attention to your chosen journal's guidelines for authorship, and be prepared to take advantage of the journal's submission system's options for specifying contributions.
Manuscript Information
Required manuscript information will probably not be limited to the title. You will likely also be asked to enter the abstract, a running head, and keywords; keywords may be your own, or you may be required to choose from a list. All this information helps expedite the vetting and potential assignment of your manuscript to a handling editor and to reviewers.
You will also likely be asked to specify a manuscript type, which you will need to choose from among the journal's specified types. Your chosen journal will have its own specifications for each of its types; so review those carefully and submit accordingly.
You may be asked to confirm certain attributes of your manuscript, including word count, reference count, number of figures, and number of tables. The journal may have restrictions on some or all of these; so be aware of those restrictions before you submit, and be accurate in your reporting of these attributes, as journal staff may verify these against your manuscript.
Research Information
Most journals subscribe to a set of guidelines for ethical research practices, and you may be asked during submission to confirm your adherence to these guidelines. A common example is the Declaration of Helsinki (World Medical Association, 2013), which seeks to ensure human participants have given their informed consent and to ensure that a research ethics committee has reviewed and approved the research protocol to ensure proper treatment of human participants. Your chosen journal may follow other or additional ethical guidelines, which should be detailed in the submission instructions; be sure to read these and ensure you can confirm your research's adherence to them.
Another common ethical guideline involves declaring any conflicts of interest. A conflict of interest is any position a researcher holds, including investments, that could benefit from a decision the researcher makes – in this case, the researcher's conclusions in the submitted manuscript. Because conflicts of interest are often a matter of perception, it is common practice for a journal to ask for any and all potential conflicts. Being transparent about all potential conflicts of interest is highly recommended, to avoid future allegations that you, or any of your co-authors, are trying to hide something about the research report.
A journal may ask for confirmation that other ethical research practices were followed. It is important to be transparent and forthcoming about all the practices that the journal asks about; doing otherwise could result in ethical problems if your manuscript is published.
Submission systems increasingly request or require information about research funding. Providing this information to your chosen journal allows the publisher to help authors of accepted manuscripts comply with grant funding requirements. Many journals now cooperate with grant funding agencies and organizations to provide data about and access to published manuscripts. For example, many publishers will deposit in PubMed Central a version of a National Institutes of Health-funded manuscript, in order to comply with NIH requirements. Also, a growing number of publishers participate in the Clearinghouse for the Open Research of the United States (CHORUS), an initiative that enables publishers to help authors comply with funders’ access requirements. CHORUS’ parent organization, CHOR, is beginning to work with funding agencies in other countries (Girard, Reference Girard2017), such as Japan and Australia, to develop similar services. So providing information about your research funding will be beneficial to you and your funder, if your manuscript is accepted. Common requested or required information includes the funder name and your grant number(s).
Some journals now ask for explicit metadata about your research design. For example, you may be asked to enter information about your dependent and independent variables, or about any observations you excluded from analysis, as well as why you excluded them. You may also be asked to justify your sample size. The journals that ask for this type of information typically need it to help evaluate your manuscript.
You may also be asked to specify how your research data will be made available, whether to the journal editors, to the journal's reviewers, to journal readers, or to the public. If your funder does not have an open data requirement, your chosen journal may. And even if the journal doesn't require open data, it may require that an author supply data to an editor or a reviewer on request. The Peer Reviewers’ Openness Initiative now advocates for open data (Morey et al., Reference Morey, Chambers, Etchells, Harris, Hoekstra, Lakens and Zwaan2016), including the availability of those data to peer reviewers, and a growing number of researchers will not review for a journal unless it agrees to provide them with author data. As with your manuscript, reviewers must pledge to keep your data confidential and use it only for review purposes.
Peer Review Preferences
Many journals ask for author input on some details of how the peer review process will proceed on the manuscript. For example, many journals allow an author to recommend or oppose one or more of the journal's handling editors. Also, many journals allow an author to recommend or oppose certain reviewers. Journals may ask you for a reason to support your recommendation or your opposition. Specifying these preferences may constrain the journal's editor-in-chief in her or his handling of your manuscript; so it's best to use this option judiciously. Be aware that editors may choose to ask reviewers whom you oppose to review your manuscript in order to learn what these reviewers think. Therefore, do not view your opposition to a reviewer as imposing a definite constraint on the editor. Some editors even routinely write to opposed reviewers to get a balanced picture of reactions to articles.
Some journals allow the option of open peer review. Although open peer review can mean a variety of things (Ross-Hellauer, Reference Ross-Hellauer2017), frequently it means that the journal's chosen reviewers agree to reveal their identity and have their reviews published with any accepted manuscript. Because the reviews will be understandable only by seeing the submitted manuscript, as well as any submitted revisions, agreeing to open peer review frequently means agreeing to have all versions of your manuscript published, along with the signed expert reviews, as supplements to your final accepted version. If the journal offers open peer review and you choose it, it usually happens only if all the reviewers agree to it.
Another type of open review is post-publication commenting by readers. If your chosen journal offers this, you may be asked to agree to it during submission. There may be other types of open review, and if the journal offers them or engages in them, the journal should be transparent about this.
Journal Policies and Procedures
Most journals have other policies related to their research missions and values, and it's important to be aware of, follow, and confirm adherence to those policies so that your manuscript can move forward to be considered for publication.
For example, if the journal disallows duplicate submission, you may be asked to confirm that your manuscript isn't presently under consideration by another journal. You may also be asked to confirm that you have the agreement of all authors listed on the manuscript, to submit to this journal.
Also, many journals allow authors to provide materials supplemental to the research paper, either for review or for publication, and if they do, then you may be asked to specify what those are before you upload them. Examples of supplemental materials include computer code and supplemental methods.
You may also be asked to agree to the journal's terms of publication.
File Uploading
Frequently the final step in submission is uploading the actual files. Some journals may ask that you upload all parts of the manuscript as a single file; others may ask that you upload figures or tables in a batch or as single files separate from the main manuscript. If the journal allows supplemental materials, you should be given the option to upload those. And some journals still allow upload of a file containing a cover letter, though some may ask for the cover letter to be pasted into a text field elsewhere in the submission process.
Before beginning uploads, check each of your files carefully. Most submission sites also allow you to proof your submission once uploaded, and that step should not be skipped either. It will smooth the pathway to the peer review stage if you proof both before submission and during submission.
Peer Review
Communication
Communication with the journal will typically be by email. Some of your emails to the journal may be read and handled by journal administrative staff; others may be passed on to the editor(s) handling your manuscript.
Although your emails sent to a journal will normally be via an email server, emails sent to you from the online submission system sometimes get treated as spam because they come from a web server rather than an email server. If you do not receive a confirmation of your submission soon after completing it, check your spam filter and/or with your system administrator to ensure that you are receiving all emails from the journal. Your spam filter or system administrator should be able to “whitelist” the journal submission system's web server.
Status of Your Submission
Most systems will show the status for your submission, but depending on the system, the displayed status may vary in its precision. For example, a system may or may not differentiate whether your manuscript has received agreement from the editor's preferred number of reviewers, and a system may not reveal to you how many reviews are outstanding at a given time. Although authors may prefer to know those details, the details could be misleading because if initial reviews do not provide enough information for the editor to make a decision, that editor may seek additional reviews. So if you are concerned about the status of your submission, it's best to inquire directly with the journal via email.
Triage
Journal editors routinely triage submissions to ensure they meet basic criteria – for example, fit with the journal's mission, quality of the writing, and quality of the research. The triage phase is essential to the healthy functioning of a journal because it ensures that the editor isn't burdening reviewers with submissions that don't belong in the journal.
If your manuscript gets rejected in triage (also known as desk rejection), attend carefully to the reasons given. If an editor desk rejects a manuscript because it doesn't fit the journal's mission, that editor may suggest more appropriate journals for your manuscript.
The Decision
Once the journal editor has received sufficient reviewer feedback to make a decision, the editor will render the decision through the system, which will send you a decision email. Other than accept and reject, journals can render a variety of revision decisions, such as major revision and minor revision.
The decision itself will be accompanied by a decision letter that summarizes the reasons for the decision, usually along with comments from the expert reviewers. Regardless of the decision, it's essential to read all the correspondence provided; even acceptance letters and their accompanying reviews can contain constructive feedback. Note that submission systems frequently allow reviewers to make confidential comments to journal editors as well; as an author, you will see only the comments intended for you.
If the journal requests a revision, the decision email will usually contain instructions on how to submit your revision. Journals typically ask that a revision be submitted within the original submission record rather than as a new submission; this allows editors to consider your revision in light of your original submission and potentially to send the revision to the same reviewers who read the original. So be sure to follow resubmission instructions carefully and ask the journal for help or clarification if needed.
You may also be given a deadline for resubmission. Pay careful attention to such deadlines and, if an extension is needed, be sure to request one from the journal before the deadline, as extensions are easier to implement beforehand. Journals typically do not allow submission records to remain open indefinitely because that creates an administrative burden.
A journal requesting a revision may ask for a wide range of revisions, including substantial ones but also minor ones. Addressing all the requested aspects, no matter how large or small, will smooth the pathway for consideration of your revision and may save time later if your manuscript gets accepted.
Submission of a Revision
Follow the decision email's instructions for resubmission; if you misfile the email, be sure to ask the journal and/or the submission system's support staff for help. It's in everyone's interest that your revision be submitted in the way that the journal's system expects.
Journal systems typically allow you to respond to reviewer feedback, either in a cover letter or in a separate document. Follow the journal's directions on how to do this, as each system will have its own way of showing your comments to the reviewers.
Rejection
Sought by no one but granted to practically everyone at some point, rejection can provide important information for getting your manuscript published. As with any other decision, be sure to read the entire letter and all attached reviews. Frequently a journal's rejection will recommend at least one course of action, if not several, and sometimes that will include a recommendation about another journal to submit to.
If you wish to rebut a rejection decision, follow the journal's instructions for doing so. If the journal has no published instructions for rebuttals, inquire with the journal.
Some journals will offer rejected manuscripts the opportunity to be submitted to one or more affiliated journals. This is called a journal cascade. Many cascades will allow you or the journal staff to click a button or link to easily submit your rejected manuscript to another selected journal. Frequently, this process will allow the reviews from the first journal to be shared with the editor of the second journal, which can be helpful to authors, journals, and reviewers.
Be aware of the submission requirements and business model of the next journal in the cascade. If your initial journal choice was a subscription journal, the next journal may be an open access journal, which will require payment of a publishing fee upon acceptance. Or the next journal may have substantially different ethical guidelines. A good journal cascade should make you aware of the differences among journals, but it's your responsibility to be aware of the requirements of any journal you submit to.
An emergent tool to assist resubmission to another journal is the Manuscript Exchange Common Approach Project (MECA, 2017), an exploration among several submission software providers to develop standards for transferring certain basic submission information from one journal to another. The project expects to allow journals to offer authors who receive rejection decisions the opportunity to transfer some of their submission information to another journal participating in the project. If you elect to do this, you would likely need to provide additional submission information to the subsequent journal. The project's intention is to save authors time retyping certain basic submission information about themselves and their manuscript. The MECA tool could be implemented by late 2019 (Woolston, 2018). If you are offered this option, as always, be sure to ask your initial journal any questions you have.
Acceptance
If you receive the good news that your submission has been accepted, the process is typically not yet complete. A number of administrative steps will likely need your attention before your manuscript is prepared for publication. It's worth taking care of these expeditiously to keep things moving forward: Delays on your part will delay publication.
The file types and file specifications that journals require for publication may differ from what they require for peer review. For example, journals frequently request higher-resolution artwork at this stage. And if you submitted a PDF for peer review, you will usually be asked for a word-processor file or something similar to prepare the manuscript for publication.
There will also be forms to complete, usually including some sort of publishing agreement. Many of these forms will be online and will often require, instead of a handwritten signature, an electronic signature: any electronic process or symbol by which an author indicates her or his intent (Electronic Signatures in Global and National Commerce Act of 2000). This can be checking a box, typing your name, or any other process by which a logged-in author indicates intent. Your journal will indicate how you should sign electronically.
A publishing agreement may include the transfer of copyright, but publishers now frequently allow authors to retain copyright and instead sign an exclusive license to publish, which gives the publisher that exclusive right. Open access journals and articles, however, are usually published under one of the Creative Commons licenses, which may be found at creativecommons.org.
If you are publishing in an open access journal, then there will probably be a publication fee to pay, and you may be asked to do so upon acceptance.
If your journal recognizes certain open science practices, you may be asked for related information upon acceptance. For example, a number of journals award badges to recognize open data, open methods, and/or preregistration; if you have not already provided details about these practices, you may be asked to do so before your manuscript is prepared for publication.
At acceptance, you may also be given options for depositing your accepted manuscript and/or the final published version in certain repositories, per the requirements of your funding organization. Be sure to take advantage of these services, as a growing number of journals have developed smooth repository-deposit processes.
Recordkeeping
Most submission systems do not keep digital records indefinitely; they may archive review records just a few months after a final decision on your manuscript. Be sure, therefore, to keep digital or paper copies of all versions of your manuscript and all peer review correspondence. You may also wish to download from the submission system any letters or reviewer comments not contained in the actual email correspondence.