No one is entirely sure how antidepressant medications work. One widely held theory is that the drugs modify the availability of various neurotransmitters, and consequently, the experience of depressive symptoms. Evidence for this mechanism is disputed, as we discuss in the last of our case studies.Reference Kirsch, Deacon, Huedo-Medina, Scoboria, Moore and Johnson1 But regardless of how they work, antidepressants are some of the most popular drugs in the world, and especially in well-off countries with extensive healthcare systems. There is also an immense diversity of such drugs: In the United States, the Food and Drug Administration has approved dozens of medications for the treatment of depression, in seven drug classes.2
With so many treatments available, it is tempting to ask which are most effective. Clinicians do this on a patient-by-patient basis, prescribing one drug after another, in various combinations and doses, often alongside psychotherapy, in hopes of finding individualized solutions to the vexing problem of depression. But a clinical researcher might try to answer the same question through a statistical analysis of group results. The form of the research question might be something like this: “Is medication A or medication B associated with greater reductions in depressive symptoms?”
Just as architectural form in many ways determines the function of a building, so in clinical research, the form of a research question has enormous consequences for the design of a study.Reference Porzsolt, Wiedemann and Phlippen3 In the case of our antidepressant question, the form of the question encourages an exclusionary study design. In order to make a controlled comparison between the two drugs, researchers would likely need to exclude study participants who had already tried the antidepressants in question and potentially other drugs, or people who had tried antidepressants and not responded. Participants with comorbidities – which would be the majority of the population treated with antidepressants – could create confounding variables; they too would be excluded. Pregnant people, those without adequate transportation, and anyone who might not adhere to the study protocol would also be trouble in this data set. With so many exclusions, the study group would probably not be very representative of the broader antidepressant-using public.
The form of this research question also leads to a study protocol heavily reliant on randomization. From the standpoint of study design, randomization is the best way to determine which participants will receive antidepressant A or antidepressant B. Yet, as noted above, antidepressant use doesn’t work this way in clinical practice. Patients aren’t randomly assigned drugs to try; physicians treating depression are extremely careful in experimenting with drugs and psychotherapy. Knowing whether treatment A is superior to treatment B in a clinical trial will do little to usefully inform outcomes for any given patient, whose own experience will have to determine which combinations of drugs and therapy, if any, produce the results they need. Study outcomes may be statistically significant, but they will be clinically insignificant because the initial question is simply not the sort that could result in information useful for patient care.
The case of the two antidepressant medications speaks to two broad themes of this chapter: designing methodologically sound research questions and designing socially useful research questions. The form of a research question constrains the process that will be used to answer it and the goals we can hope to achieve by reaching our answer. It is therefore imperative that researchers ask well-formed questions and, even more basically, interrogate the values underlying the questions they choose to ask. Research questions are not neutral; what sounds, in the first place, like an interesting question follows from the values we hold.
Values can be a product of conscious reflection, but they are also informed by the unconscious or implicit biases investigators bring to the formation of research questions. Confirmation bias leads us to ask questions that will tend to produce results we already believe to be true. Stereotypes induce us to ignore certain populations or treatment options – for instance, a researcher choosing to compare two antidepressant drugs via randomized controlled trial (RCT) may be (unconsciously) beholden to negative stereotypes about psychotherapy and so assume that it is irrelevant to treatment. Intellectual bias plays a role: Perhaps this same researcher has a strong allegiance to the role of neurotransmitters and wishes to show – or simply assumes – that emotions are entirely products of brain states that can be predictably reshaped through chemical manipulation. Then too, structural incentives influence the formation of research programs. Disciplinary specialization narrows the range of questions that researchers think to ask, while the pressure to publish induces researchers to focus on the sorts of questions that journals favor, even if these are not the most useful questions to ask. Journals prioritize statistically significant findings, so researchers may prioritize questions that have the greatest potential to produce such findings, regardless of their social utility.
That research questions are often poorly framed – or, worse, that they may be chosen with an intended conclusion in mind – is not news to practicing scientists.Reference Mayo, Ow and Asano4 Criticisms of research commonly include challenges to the formulation of the study question. In this chapter, we’ll discuss some foundational methodological concerns surrounding the formation of research questions before turning to investigator values and biases. Values and biases are inevitable, which is why we need rigorous methodological guardrails. But, just the same, we should try to cultivate the right values in the first place. These include equipoise, a concept we introduced in Chapter 3. Investigators must approach research with the view that their questions are unsettled – that there is genuinely science to be done. And because biases and values are inevitable, we urge that researchers be reflective about the social context of their work, so that bias can be turned in a prosocial direction. With this in mind, we discuss guidelines for asking well-formed research questions, proposing a framework that explicitly embeds the social context of research in the process of question formation.
The ethical demands of the health disciplines enjoin us to help those in need to live well. We can seek to fulfill this duty by asking questions that can provoke socially useful answers.
Some Methodological Considerations in Research Question Formation
Well-defined research questions are specific and include clear boundaries of scope. Such clarity facilitates feasibility, which is an important value for investigators: We want to undertake projects that we can actually complete. But other values matter too. Social impact matters, so we want the scope of research questions to enable projects inclusive of the kinds of people who might benefit from the intervention being studied. As the philosopher and bioethicist Kirstin Borgerson writes, “Researchers face a burden of justification related to any idealizing elements added to trial design” because such idealizing elements limit the applicability of any findings (p. 293).Reference Borgerson5 Yet research questions often set up results that are at best agnostic as to applicability. Let us not mince words: A clinical study that is agnostic as to applicability is unethical. We must take seriously the fact that the clinical utility of results is implicit in the research question preceding them, and so ask questions that are meaningful to practicing clinicians and patients.
We have suggested that person-centered approaches generate the most useful clinical research because such approaches yield results that matter to patients. Yet few research questions are formed with a person-centered approach in mind. Consider that most research tests group difference or equivalence with respect to an outcome variable – tests that are implicit in the kinds of questions we ask, such as our hypothetical question concerning the superiority of antidepressants A or B.Reference Friedman, DeMets, Furberg, Granger and Reboussin6 This method may lead to statistically significant results concerning groups, but it has two major weaknesses from a person-centered perspective. First, tests based on groups are uninformative about any one individual in the study, especially when the outcome variable is measured on a continuous scale. Second, the outcomes referenced in research questions and often used in group-comparison studies may be less relevant to participants, and less instructive to clinicians, than outcomes chosen with patients’ priorities in mind.
Common clinical research questions are also limited by the use of traditional null hypothesis testing. Traditional null hypotheses are nonspecific, which means that the researcher’s default prediction is that the effects of two interventions will not differ. The alternative hypothesis is that they will differ, in a positive or negative direction.Reference Szucs and Ioannidis7 This leads to questions of the form: Does treatment A have different outcomes than treatment B? Yet investigators often know enough to make a prediction about the direction of an effect, and it is this that they should be testing – not whether an intervention leads to any effect, but whether it leads to a predicted effect. Asking whether an intervention leads to any effect increases the likelihood of statistically significant findings but reduces the likelihood that the findings will have any clinical value. Another way to put this is that researchers should make one-tailed rather than two-tailed hypotheses. A one-tailed hypothesis predicts, for example, that A works better than B, whereas a two-tailed hypothesis predicts that A differs from B – it could be more or less effective. A poorly formed research question might involve a two-tailed hypothesis concerning group difference when there is sufficient prior research to enable a one-tailed prediction. The question in a two-tailed case invites a broader range of results that might be publishable because of statistical significance. But it could be a wasteful question if there is sufficient preexisting knowledge.
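The contrast between the two kinds of hypothesis can be made concrete with a small sketch. The data, sample sizes, and effect here are invented for illustration; nothing is drawn from a published trial.

```python
# Hypothetical illustration: the same simulated trial data evaluated with
# a two-tailed test ("Do A and B differ?") and a one-tailed test
# ("Is A better than B?"). All numbers are invented for this sketch.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulated symptom-reduction scores, with treatment A assumed better.
a = rng.normal(loc=13.0, scale=5.0, size=80)
b = rng.normal(loc=10.0, scale=5.0, size=80)

# Two-tailed: the alternative hypothesis is a difference in either direction.
t_stat, p_two = stats.ttest_ind(a, b, alternative="two-sided")

# One-tailed: the alternative hypothesis is the directional prediction
# that prior evidence may already justify (A > B).
_, p_one = stats.ttest_ind(a, b, alternative="greater")

# When the observed effect lies in the predicted direction, the one-tailed
# p-value is half the two-tailed one: the directional question is tested
# more efficiently against the same data.
print(f"two-tailed p = {p_two:.4f}, one-tailed p = {p_one:.4f}")
```

The point of the sketch is not that one-tailed tests are a shortcut to significance, but that a directional question, when prior knowledge justifies it, is a sharper question, and the analysis should match it.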
Badly formulated hypotheses stem from badly formed research questions, collectively fostering “wiggle room” in analysis and interpretation. A good research question lends itself to a clear prediction to be tested in unambiguous terms; where such clarity is lacking, there is an open door for spin – the selective interpretation of study results in a manner that favors a particular treatment. The methodology of spin is fairly simple: Data that do not justify a researcher’s conclusions are interpreted as though they do support these conclusions. This problem is rampant in studies. Consider an evaluation of recent RCTs of cardiovascular treatments published in top journals. Muhammad Shahzeb Khan and colleagues focused on RCTs finding no statistically significant treatment benefit for the primary outcome variable, and discovered that, in 67 percent of publications concerning such trials, authors nonetheless made statements in favor of the value of the treatment. Statements implying that the treatment worked appeared in 57 percent of abstracts and 54 percent of conclusions. Brazenly, such statements even appeared in 10 percent of the article titles.Reference Khan, Lateef and Siddiqi8 Figure 4.1 summarizes the findings. The authors conclude: “In reports of cardiovascular RCTs with statistically nonsignificant primary outcomes, investigators often manipulate the language of the report to detract from the neutral primary outcomes” (p. 1). This is spin in action, and it’s possible because of imprecise research questions and imprecise hypotheses. A clear prospective declaration of how hypotheses will be evaluated leaves authors less room to spin their interpretation of data.

Figure 4.1 Spin/overstatement by article section.
In the studies evaluated by Khan and colleagues, the absence of statistically significant findings is telling. But we want to be clear that when we say that research should be valuable, we do not require that it should produce statistically significant findings.Reference Freedman9 Such findings may indicate valuable results, but the idea that statistical significance in itself indicates value is purely a matter of convention. Statistical significance, on its own, has nothing to do with clinical usefulness. For one thing, prioritizing statistical significance means discarding null studies. These do not produce statistically significant outcomes, yet, when properly powered, such studies provide valuable information: As patients, we want to learn what treatments might help us, but we also want to know which treatments waste our time and money. Null studies are not necessarily failures; instead, they may be warnings of inefficacy. Some evidence suggests that a significant portion of medical treatments have no effect, or limited effect, on health outcomes – something we might have known by paying attention to null results and protecting them from spin.Reference Kaplan and Irvin10 Then too, and more fundamentally, statistical significance should not be conflated with value because it does not speak to whether research findings are important to anyone in need. When we ask whether there is statistically significant evidence for the efficacy of an intervention, that may well be all we ever find out, with the broader question of how to help struggling individuals left unaddressed.
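What it means for a null study to be “properly powered” can be sketched numerically. The following is our illustration, using a normal approximation rather than the exact t-based calculation, with invented sample sizes:

```python
# Hypothetical sketch: approximate power of a two-arm study to detect a
# moderate standardized effect (Cohen's d = 0.5), via a normal approximation.
import math
from scipy import stats

def approx_power(d, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided, two-sample test (normal approximation)."""
    se = math.sqrt(2.0 / n_per_arm)          # standard error of the standardized difference
    z_crit = stats.norm.ppf(1 - alpha / 2)   # two-sided critical value
    ncp = d / se                             # expected z-statistic under the alternative
    return stats.norm.sf(z_crit - ncp) + stats.norm.cdf(-z_crit - ncp)

# A small trial is likely to miss a real, moderate effect...
low = approx_power(0.5, n_per_arm=30)
# ...while a larger one will almost certainly detect it, so its null result
# genuinely suggests the absence of a moderate-or-larger effect.
high = approx_power(0.5, n_per_arm=200)
print(f"n=30 per arm: power = {low:.2f}; n=200 per arm: power = {high:.3f}")
```

On this approximation, the small trial has roughly a coin-flip chance of detecting the effect, so its null result is ambiguous; the larger trial’s null result is a meaningful warning of inefficacy.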
Existing guidance on the framing of research questions, though in many ways useful, does not directly orient investigators toward high-utility projects with prosocial bias. One common model, PICOT, emphasizes broad methodological issues. PICOT refers to population, intervention, comparator, outcome, and timeframe (the duration of the study or intervention).Reference Guyatt, Rennie, Meade and Cook11 These are all, of course, important areas to reflect on in designing a research project, but nothing here encourages investigators to think about what outcomes actually matter to users of healthcare. Another model, FINER, asks whether research questions lead to feasible, interesting, novel, ethical, and relevant studies.Reference Hulley12 FINER perhaps gets us closer to useful research questions and outcomes, but we believe a better mnemonic would explicitly remind researchers to keep the social context of research in mind. Later in this chapter, we will argue for an alternative framework that overcomes the limits of PICOT and FINER by directly incorporating awareness of bias and social context into the formation of research questions.
Investigators’ Values and Biases
Scientists can readily describe their disciplinary backgrounds, areas of expertise, and programs of research, but often are unaware of how political and social values affect their work.Reference Potochnik13 Inevitably, however, these values are revealed by the actions we take. Values show themselves when investigators prioritize among research questions.Reference Elliott14 Values influence which questions we ask, which methods we use, which assumptions we make concerning data analysis and interpretation, and how we handle the limitations that arise during the course of studies.
Here is an example of revealed values: While there is no objective reason to study the experiences of men preferentially, medical research especially has a long history of focusing on men, resulting in less information on treatment efficacy among women.Reference Potochnik13 We agree with the philosopher of science Kevin C. Elliott, who argues that “values are not completely absent for any area of science” (Chapter 1, p. 11).Reference Elliott14 Instead of pretending otherwise, we should be realistic, transparently acknowledging our values, scrutinizing them, and demanding that they reflect social and ethical priorities.
We do not mean to suggest that this is easy, not least because researchers are not necessarily rewarded for doing high-value research. Research scientists are positively reinforced – with academic promotions, awards, funding, citations, and so on – for having a deep knowledge of relevant literature, mastering procedures, and productivity as measured by publication and citation metrics. But productivity in these terms is not meaningful to patients. Where valuable research is also risky – in the sense that it may not produce results that win the attention of top-journal editors – there is a good chance that no one will undertake it. There is something to be said for balancing high-risk, high-interest projects with lower-risk, lower-interest projects (p. 1530), but many scientists will go further, choosing caution at all times.Reference Kahn15
Even interesting research questions are not necessarily good ones. The FINER criteria suggest that researchers prioritize interest, yet what interests a person reflects their preferences, which are synonymous with biases.Reference Holland16 To be clear, we use the term bias descriptively rather than judgmentally; bias does not necessarily have positive or negative valence. The point is simply to recognize that researchers should scrutinize their own preferences. So here is a good question: Why does this research, in particular, interest me? Is it the most socially useful research I could be doing, or is it a tidy, short-term experiment selected for low risk – the kind of work that demonstrates productivity in the short term but is unlikely to contribute to valuable longer-term outcomes for patients?
Researchers can dig more deeply into the nature of their interests by thinking about the values underlying them. In a widely cited article on a possible “universal psychological structure of human values,” psychologists Shalom Schwartz and Wolfgang Bilsky define values as “(a) concepts or beliefs, (b) about desirable end states or behaviors … (c) that transcend specific situations, (d) guide selection or evaluation of behavior or events, and (e) are ordered by relative importance” (p. 551).Reference Schwartz and Bilsky17 Such values are reflected in individual and collective interests, and we suggest that ethical clinical research is oriented toward the latter, although it is not the case that individual interests need inevitably be sacrificed for the benefit of collective interests. Schwartz and Bilsky isolate seven motivational value domains, three of which emphasize individual interests (enjoyment, achievement, self-direction). Another three emphasize collective interests: These are prosocial values, focused on concern for others’ welfare; security values, which promote safety, health, and stability; and values of conformity, which prioritize compliance with social expectations. Finally, there are values that reflect both individual and collective interests, which Schwartz and Bilsky put in terms of maturity – wisdom gained through experience.Reference Schwartz and Bilsky17 Notably, even individual interests will often depend on social context. Achievement and self-direction, for example, are typically contingent on others’ approval: To become independent, scientific investigators must first demonstrate their preparation to supervisory committees and more senior investigators.
Schwartz and Bilsky’s framework has the virtue of distinguishing personal from social values. Researchers should have these same distinctions in mind. It is not that our individual interests are never relevant, but they should be understood as distinct from social interests because only one of these kinds of interests is subject to moral injunctions. Realizing personal interests is all well and good, but as investigators our duty lies in pursuing the social good.
Consequences of Specialization
The progressive specialization of the sciences is a contributor to researcher biases.Reference Casadevall and Fang18 By specialization, we mean the narrow expertise associated with mastery of one’s field. There is too much knowledge for any one person to possess; across disciplines, within the sciences and beyond them, even the most expert among us must limit ourselves to a manageable range of information and skills. This is in many ways beneficial: Specialization allows for efficient distribution of labor and ever-deeper understanding in many domains.
But specialization also reflects and enables the contraction of interests and values. The exclusivity of expertise comes at the cost of isolation from other fields that might inform research questions, goals, and methods. The result can be a disciplinary echo chamber in which creativity and challenge are no longer tolerated and specialists lose the capacity to speak to nonspecialists or understand their interests. Under such conditions, it is easy to ask questions for the benefit only of one’s peers, not for that of society at large. Consider that specialty journals are more likely to publish research results that support the sorts of interventions already favored by the professionals subscribing to the journal.Reference Luty, Arokiadass, Easow and Anapreddy19 Say a study demonstrates that a cognitive-behavioral intervention reduces blood glucose more effectively among people with Type-2 diabetes than does a well-known drug. Such a study is more likely to find a home in a behavioral medicine journal than an internal medicine journal, even though the result is highly relevant to both specialties.
When we ask ourselves what research questions interest us – and why they do – we should take care to ensure that we are not allowing ourselves to be misled by our commitments to narrow fields of study. The literature review that precedes a study should be capacious, seeking input from various disciplines, to determine whether our questions have already been adequately answered by others. Findings outside one’s discipline can also inspire better questions – for instance, maybe an internal-medicine specialist should seek to compare the effects of a drug treatment with those of a behavioral intervention, rather than another drug treatment. Or maybe sociological findings locate sources of disease – and also remedy – in social determinants of health, pointing to the worth of studying social rather than medical interventions.
We don’t mean to suggest that specialization ought to be avoided. That would be both impossible and undesirable. We need to promote the benefits of specialization while also compensating for its disadvantages by encouraging scientists to recognize and mitigate the biases that specialization inevitably creates. One key way of addressing such biases involves cultivating equipoise.
Research Questions Arise in Social and Personal Contexts
Research is a social enterprise, in at least two ways that we have hinted at so far in this chapter. Research is a social enterprise because the questions researchers choose to answer reflect their embeddedness in social realities. And research is a social enterprise because it can, and should, have beneficial effects on people’s lives. It can also have negative effects, as when poorly designed and reported research leads to false hope, often secured at great cost.
We encourage investigator self-reflection in order to unearth biases and ask good questions. So in the course of self-reflection, keep the following in mind. If you are a healthcare researcher, you were educated alongside your particular cohorts in your particular schools. You have learned both skills and ideas about what matters from teachers, mentors, laboratory groups, professional colleagues, and fellow members of professional societies. Then too, as a clinical researcher, you depend on the availability and willingness of human participants and on input from colleagues – sometimes anonymous input. Your research likely involves collaboration with students and colleagues and is influenced by journal peer reviewers, Institutional Review Boards (IRBs), and grant reviewers. Your relationships with journal editors and funding agency officials influence your ability to conduct research and disseminate the results.Reference Lakatos and Musgrave20 These factors add up to a complex social context that influences researchers’ values, interests, methods, and capacity for objectivity. When a researcher asks whether a possible project has social value for clinicians, patients, and communities, their answer will inevitably be shaped by their background and by the incentives they face.Reference DuBois and Antes21
Social context helps to determine what sort of questions are askable, as well as how research projects will be received. Consider the experience of the addiction researchers Mark and Linda Sobell, who faced serious consequences for reporting results that challenged dogma. The Sobells asked a forbidden question: How much alcohol can alcohol abusers safely drink? In the 1970s, when the Sobells undertook this research, the dominant view in their field was that people who abused alcohol could not safely drink any amount: Abstinence was the only way to control intake. The Sobells conducted an RCT comparing abstinence-based with controlled-drinking treatment and reported superior short- and longer-term outcomes for those in the controlled-drinking treatment group.Reference Sobell and Sobell22, Reference Sobell and Sobell23 These results were at such odds with the dominant view that the Sobells were charged with fraud.Reference Pendery, Maltzman and West24 An independent investigation found no evidence to support the fraud allegation.Reference Norman25 Nonetheless, the charges caused havoc for the Sobells.Reference Marlatt26
The Sobells’ tale speaks to the challenges of pursuing novel research questions with social impact. Those who undertake narrow, low-risk projects will likely avoid a certain amount of blowback. But they may forego the opportunity to achieve useful results and, worse still, occupy resources that could be directed toward worthwhile outcomes. Meanwhile, the Sobells have been vindicated, at least in the sense that it is now much easier to ask challenging questions about alcohol addiction and healthy use. Today, where once there was stifling uniformity, there is robust and ongoing debate. For instance, one 2023 study from the Canadian Centre on Substance Use and Addiction recommended having no more than two drinks per week, while another study from the same year found that light or moderate alcohol consumption (defined as up to 14 drinks per week) is associated with fewer major cardiovascular events than abstinence or very low alcohol consumption.Reference Paradis, Butt and Shield27, Reference Mezue, Osborne and Abohashem28 In science, conflicting results should be expected; part of our mission as a scientific community is to understand why results do not always converge. We would do well to foster social contexts in which a well-executed study can be taken seriously, even if it produces results that defy expectations.
This is important to keep in mind, as a researcher’s personal experiences, values, and beliefs may influence how their work is received, as well as which questions matter to them. A personally relevant research question may be especially attractive. A scientist who has experienced depressive episodes may be especially drawn to the study of depression.Reference Devendorf29 It is also not unusual to find an African American scientist studying health disparities affecting African Americans. But research by and about minoritized people is often devalued, making this an important but potentially thankless task.Reference Buchanan, Perez, Prinstein and Thurston30 Researchers should know, then, that there are potential benefits to studying issues relevant to their own identities or experiences. Personal insights may shape the formulation of a study question and design, resulting in more robust research.Reference Ayoub and Rose31
That said, there are potential risks of studying personally relevant topics, known in derogatory terms as me-search. The use of personal knowledge might be seen as contrary to some ethical standards, such as one that states that judgments should be based on “established scientific and professional knowledge.”32 Research consumers tend to distrust personally relevant research, suggesting that studies of this sort are thought to be subject to bias. Both lay and professional consumers have demonstrated less favorable reactions to self-relevant research,Reference Altenmüller, Lange and Gollwitzer33, Reference Rios and Roth34 although this effect is moderated by lay consumers’ favorable or unfavorable attitudes toward the research topic.Reference Rios and Roth34
Bias versus Equipoise
Interest in a particular research problem is inspiring.Reference Elliott14 The desire to resolve a scientific or technical challenge sparks the imagination and helps the researcher persist in spite of thorny problems and dead-ends. Specialization is helpful here too, increasing the likelihood that researchers will have the tools they need to overcome challenges. But interest and specialization can also blind the researcher to alternative questions, goals, theories, hypotheses, and methodologies, resulting in commitment to a particular outcome and, potentially, projects influenced by confirmation bias. We suspect that most, if not all, of us fall into this trap. The challenge lies in being aware of the limitations of our own objectivity.
From the very start of the research process – that is, from the moment of question formation – equipoise helps us see past the blinders reinforcing confirmation bias. Recall that equipoise is the stance embodied by an investigator who is uncertain about the outcome of their inquiries. Equipoise demands that research questions be genuine questions: There must be real doubt as to whether this or that treatment, test, or preventive program works, or works better than some alternative. Ethicists argue that if researchers enter into a study convinced of what the results will be, they should invest their energies elsewhere.Reference Lilford and Jackson35–Reference Lilford and Jackson37 Equipoise, in other words, entails neutrality.
Because equipoise inheres in the formation of research questions, it is a prerequisite for studies of all kinds – experimental and observational. Where there is uncertainty about the efficacy of a treatment, it may be appropriate to assign study participants to treatment or control groups using a random process and to collect and analyze the resulting data impartially. Where there are valid questions to be asked about real-world treatment effects, those effects should be assessed – again impartially. Any time an investigator considers a research question they might pursue, they should undertake an honest and searching appraisal of their own equipoise.
Encouraging Valuable Research: Community Engagement and Question-Formation Frameworks
Some existing structures aim to facilitate reflection on how values and ethics are embodied in research. The National Science Foundation requires that grant proposals articulate the broader effects of the proposed research on society.38 And IRB applications require that researchers justify research questions in terms of social significance. Yet these mechanisms are very limited, not least because they are driven by top-down regulations: external panels, governed by law, judge proposed research questions and programs for their compliance with ethical principles.39
But what about internal reflection on researcher values?Reference Berling, McLeskey, O’Rourke and Pennock40 In ensuring that we are asking the right questions, we shouldn’t rely only on outside oversight. We should know that we are making good choices – ethical, socially valuable choices. We can and should ask ourselves whether the work we do aligns with the positive character traits associated with science – traits such as curiosity, honesty, objectivity, humility, and perseverance.Reference Pennock and O’Rourke41 Can I foresee being sufficiently curious about the problem that I will report the results with integrity, whatever they may be, without selectivity or spin, and that I will pursue my “project to any depth necessary”?Reference Kahn15 And we can and should reflect on whether the research questions we have in mind have significant value to patients, clinicians, and communities.
To this end, there is much to learn from the well-established approach of community-based participatory research (CBPR). This approach is person-centered; it recognizes social determinants of health and draws on the unique knowledge of communities, community members, and researchers in the co-creation of research.Reference Collins, Clifasefi and Stanton42 Unlike traditional, investigator-driven research, CBPR invites relevant communities into the process of designing research questions and procedures. In this way, the research questions that matter to communities can rise to the surface, and likely study participants can be involved in ensuring that research meets ethical standards. Such community input can be especially valuable in establishing standards of informed consent.
BASES: A Social-Values Framework for Research Questions
Let us return to the case that opened the chapter – how to ask a good question about the utility of antidepressant medications. Our original question was as follows: Is antidepressant medication A or B associated with greater reductions in depressive symptoms? This question is typical of an RCT focused on comparative efficacy. However, when a group of researchers asked hundreds of patients with depression for their top research question, the patients came up with something quite different: “To what extent are antidepressant medications and psychotherapies associated with long-term recovery among those with depressive disorders?”Reference Breault, Rittenbach and Hartle43
In Table 4.1, we compare these two research questions using three sets of criteria: PICOT, FINER, and our own system, called BASES. In this comparison, it is evident that the FINER criteria lead to a more critical evaluation of the kinds of questions conducive to RCTs than do the PICOT criteria. But much is left out of both frameworks, especially when it comes to the biases and social values influencing the formation of research questions. Like FINER and PICOT, BASES is a mnemonic – a reminder to which researchers can turn when assessing projects. We ask investigators to examine the biases inherent in a research question, with awareness of their consequences; to take stock of the social contexts of research and strive to maintain equilibrium among the interests of investigator, patients, clinicians, and communities; and to assess honestly the specificity of their research questions, to prevent manipulative analysis. The BASES mnemonic reminds researchers to think about the basics: the values, beliefs, and priorities reflected in research questions; the reality that science is a social enterprise with many constituents having diverse needs; and the methodologies that ensure useful research results.
Table 4.1 Comparison of research question–formation frameworks: PICOT, FINER and BASES
| Criteria | Research question (1) | Research question (2) |
|---|---|---|
|  | (1) Is antidepressant medication (ADM) A or B associated with greater reductions in symptoms of major depressive disorder (MDD)? | (2) To what extent are ADMs and psychotherapies associated with long-term recovery among those with depressive disorders? |
| PICOT: population | Highly selected sample meeting criteria for MDD without comorbidities | Patients with depressive disorders |
| PICOT: intervention | ADM A | Two major classes of mental health treatment |
| PICOT: comparator | ADM B | N/A |
| PICOT: outcome | Change in experience of depressive symptoms | Recovery |
| PICOT: timeframe | Short term (weeks) | Long term (years) |
| FINER: feasible | Probably, if the highly selective sample can be recruited | Probably, if relevant databases are accessible |
| FINER: interesting | Questionably | Yes, to researchers, clinicians, and patients |
| FINER: novel | Questionably | Possibly |
| FINER: ethical | Questionably | Yes |
| FINER: relevant | Questionably | Real-world relevance |
| BASES: biases | Invites investigator- and industry-centered biases | Biased toward patient interests and practice relevance |
| BASES: awareness | Unexamined | High external validity with costs to internal validity |
| BASES: social | Unspecified | Informed by priorities of the community of patients with depression; uses data from actual practice |
| BASES: equilibrium | Unbalanced in favor of investigator interest in researcher productivity | Balances interests of investigators, patients and their community, and clinicians |
| BASES: specificity | Unexamined | Mitigates consequences of broad questions and imprecise analyses |
Conclusions
Good research is useful research, and useful research begins with questions oriented toward utility (see Box 4.1). Well-meaning, committed scientists will disagree about which questions ultimately are the most useful, but we believe there should be broad agreement as to what sorts of factors are relevant to utility. Above all, the research question should be relevant to the clinical process. The question should enable a project that speaks to the genuine needs of the population that ostensibly stands to benefit. The question should delineate a project of unambiguous scope, so that there is one clear primary outcome variable to be evaluated, and its social relevance is obvious.
- Increase investment in and emphasis on defining the research questions.
- Highlight the social context of research.
- Accentuate the importance of investigator self-awareness in considering the research question.
- Use the BASES mnemonic to encourage formation of well-designed, socially useful research questions.
The choice and conceptualization of research questions are not only crucial but also subject to bias. In this chapter, we have tried to sensitize readers to this fact not because we have some magic remedy that will eliminate bias but because bias is inevitable. The challenge is to recognize bias and to compensate when it leads to inaccurate conclusions. Ethical research, as we saw in Chapter 3, does not merely avoid harm. It also does good for those in need. With this in mind, our BASES framework is designed not just to ferret out self-interested bias but to promote other-interested bias. A good research question isn’t free of bias; it is biased toward methodological rigor and toward the priorities of patients and communities.
