1. Introduction
Quantum field theory (QFT) provides a general framework for formulating physical theories, replacing predecessors with similar scope, such as classical Lagrangian mechanics. Physicists have developed successful QFTs for the weak, strong, and electromagnetic forces, but what are the prospects for gravity? Early efforts to formulate a QFT for gravity showed that it lacked a feature then taken as necessary for a sensible QFT: perturbative renormalizability. For theories with this property, such as QED, the infinities that arise in calculating quantities through a perturbative expansion around the free field theory can be tamed by reparametrizing a finite number of “bare” coupling parameters appearing in the Lagrangian. The renormalized theory then yields predictions regarding diverse physical processes. A perturbative expansion of general relativity (GR) differs strikingly from QED (and theories of the other forces), however, because of the dimension of its coupling constant. Heuristic “power-counting” arguments link the dimension of the coupling constant(s) to the ultraviolet behavior of the theory and suggest that no finite reparametrization will eliminate all of GR’s ultraviolet infinities. These arguments have been supplemented by rigorous proofs that gravity fails to be perturbatively renormalizable.^{1}
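The power-counting heuristic can be sketched as follows (a schematic illustration for the reader, not the rigorous proofs just cited). In natural units, Newton's constant carries mass dimension $-2$, so each order in the perturbative expansion costs two powers of energy over the Planck mass:

```latex
G_N \sim M_P^{-2}, \qquad M_P \sim 10^{19}\,\text{GeV},
\qquad
\mathcal{A}(E) \sim \sum_{n} a_n \left(\frac{E}{M_P}\right)^{2n}.
```

Each successive order diverges more strongly in the ultraviolet, so absorbing the infinities requires an ever-growing set of counterterms rather than a finite reparametrization.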
Yet these results no longer present a roadblock, given the dramatic reversal of fortune nonrenormalizable theories have experienced. This new perspective follows from the use of renormalization group techniques to clarify how different terms in a Lagrangian behave under changes of scale. Predictions can still be extracted from (some) nonrenormalizable Lagrangians, whose low-energy properties can be fully characterized in terms of a finite set of parameters.^{2} Physicists routinely construct effective field theories (EFTs) designed to mimic the low-energy physics of more fundamental Lagrangians. The finite set of parameters sufficient to specify low-energy behavior (e.g., coupling constants and masses) can then be determined experimentally, leading to a variety of further predictions, just as in the case of renormalizable theories. The presence of nonrenormalizable term(s) in the Lagrangian, rather than indicating a failure, merely delimits the domain of applicability of the EFT. Furthermore, many distinct candidates for a (more) fundamental Lagrangian may generate the same low-energy EFT. When this is the case, the EFT is insensitive to contrasts in the descriptions of higher-energy physics they provide. As Ruetsche (2020) succinctly puts it, “T is merely effective just in case T, while not itself a complete and accurate account of physical reality, approximates that account whatever it is (!) within a restricted domain of application” (298, original emphasis).
The EFT approach promises to justify our confidence in low-energy theories while remaining agnostic about physics at higher-energy scales. Making good on this promise requires some assurance that all reasonable candidates for a (more) fundamental theory flow to a low-energy EFT. The cases where physicists have been able to prove that the renormalization group flow has the desired properties share two features: locality and naturalness. Locality is the requirement that the Lagrangian depends on fields and their derivatives at a point. Although naturalness has been used in a variety of distinct senses, Williams (2015) argues convincingly that these can all be seen as stemming from the concept of autonomy of scales: the expectation that physics at low-energy scales decouples from physics at higher energies. If naturalness holds, the dynamics within the relevant domain are insensitive to the details of physics at higher-energy scales. Although often left unstated, there are some minimal structures required to set up an EFT, such as a method for demarcating high- from low-energy degrees of freedom. At a minimum, this requires enough spacetime structure to define a useful notion of energy and a sufficiently strict division between high and low energies, the latter of which falls within the domain of the EFT. We will discuss these issues further (section 3), but these brief comments are sufficient to illustrate that the criteria for a (more) fundamental theory to be well approximated by a low-energy EFT are much less restrictive than those imposed by demanding a renormalizable QFT. This suggests a very different take on the “problem of quantum gravity”: To what extent can we treat classical general relativity as the low-energy EFT of an unknown quantum theory?
Indeed, EFT methods have been successfully applied to a variety of problems in gravitational physics over the last two decades (see, e.g., Burgess 2004; Donoghue 2012, for reviews). However, a careful analysis of the domains in which EFT methods work for gravity highlights the exceptional nature of these cases. From successful applications, we learn that EFT methods work for models that can be treated as “nearly” static or (asymptotically) flat, but they do not work in a variety of other situations routinely described with classical gravity. In attempting to construct an EFT for dynamically evolving models in cosmology, for example, self-consistency problems arise in assessing whether one has actually expanded around a solution (see, in particular, Bianchi and Rovelli (2010) and further discussion in section 3). The more general question of whether all gravitational models can be treated using EFT methods remains open.
One response takes the applicability of EFT methods as a new criterion of adequacy: if we cannot construct an EFT, then we have no way of understanding how to treat a classical solution as an approximation to a more fundamental quantum model. Although looking for keys lost at night under the lamppost is often a good strategy, this response seems to foreclose the possibility of a further generalization, like the move from renormalizable QFTs to EFTs. The need for further generalization would not be a surprise: research programs in quantum gravity have had to replace the spacetime structures employed in formulating conventional QFT—such as Poincaré symmetries and the causal structure of Minkowski spacetime—with structures definable in generic curved spacetimes. Here, we will assess the assumptions of the EFT framework and argue that they also impose constraints that gravity might force us to break. The focal point for our discussion is the cosmological constant problem (CCP), which we take to signal the internal breakdown of EFT methods for gravity, particularly over cosmic distance scales in near-Friedmann–Lemaître–Robertson–Walker (FLRW) spacetimes.
Suppose that (i) we treat classical GR as the lowest-order term in an EFT, whose action $S^{\text{eff}}$ in principle follows from a full theory of quantum gravity via integrating out higher-energy modes from the “true” action $S$. We assume that the Planck mass is the energy scale used to separate high- from low-energy degrees of freedom, with $S^{\text{eff}}$ only concerned with the latter. If we assume that (ii) this EFT is stable and autonomous with respect to higher-energy physics and is able to reproduce all effects of classical GR, trouble arises as a result of the relevant terms in the effective action:^{3}
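The effective action itself is not reproduced above; a schematic form consistent with the description in the next paragraph (normalization conventions vary across the literature) is:

```latex
S^{\text{eff}} = \int \mathrm{d}^4x \, \sqrt{-g}\,
\left[ \frac{M_P^2}{2}\bigl(R - 2\Lambda\bigr)
 + \sum_{n \geq 1} \sum_i \frac{c_i}{M_P^{2n}} \, \mathcal{O}_i^{[2n+4]} \right]
 + S_{\text{matter}}^{\text{eff}},
```

with the first two terms the Einstein–Hilbert terms and the $\mathcal{O}_i^{[2n+4]}$ curvature invariants of mass dimension $2n+4$.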
We have made explicit the nature of the EFT expansion for the gravitational terms in equation (2). The first two terms are the familiar Einstein–Hilbert action terms. The ${\mathcal{O}}_i^{[2n+4]}$ are higher-order terms in the gravitational Lagrangian with mass dimension $2n+4$, constructed from the Riemann and Ricci tensors and Ricci scalar and subject to the symmetry constraints of GR. The terms are ordered by their mass-energy dimension; constants $c_i$ are dimensionless coupling strengths, suppressed by the explicit powers of the Planck mass. At low energies relative to the Planck mass, higher-dimension terms will be heavily suppressed. Note that the cosmological constant term $\Lambda$ has mass dimension 4. A similar expansion in the matter Lagrangian leads to a full EFT treatment of gravity and matter.
If we assume (iii) that the couplings in this EFT vary under renormalization group flow, we run into a problem. Both the $\Lambda$ term and the first term in the matter EFT expansion $\langle \rho \rangle$ have mass dimension 4 and are relevant parameters; they fail to be “natural” in that they receive contributions proportional to $m^4$ under renormalization group flow from one energy scale $m$ down to another. Even if we stipulate that $\Lambda$ has a small value in an effective action $S'$ at some high-energy scale, this will not be true for the action $S^{\text{eff}}$ obtained at a lower scale via the renormalization group, as a result of radiative corrections.^{4} When we write the EFT action, we also assume (iv) that the zero-point energies minimally couple to gravity. The zero-point energies from the quantum fields then take the same constant form in the action as $\Lambda$, so these terms should be grouped together. If this EFT applies everywhere below the cutoff scale, then this $\Lambda + \langle \rho \rangle$ term would have various observable effects (described later in the article). Given the quartic dependence of both on the mass-energy scale $m$, even integrating from relatively low-energy scales leads to a dramatic conflict—“the worst prediction in the history of physics”—with observational bounds.
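The quartic sensitivity can be made explicit with a one-loop estimate (a standard back-of-the-envelope expression; the numerical factors are indicative only). The zero-point energy of a free field of mass $m$, summed over modes up to a cutoff $\Gamma$, is

```latex
\langle \rho \rangle \sim \int^{\Gamma} \frac{\mathrm{d}^3k}{(2\pi)^3}\,
\frac{1}{2}\sqrt{k^2 + m^2}
\;\sim\; \frac{\Gamma^4}{16\pi^2} + \frac{m^2 \Gamma^2}{16\pi^2} + \cdots,
```

so each massive field contributes terms scaling as the fourth power of the relevant mass scale to $\Lambda + \langle \rho \rangle$.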
This is, in a nutshell, the CCP.^{5} What should we make of it? Here, we want to draw a contrast between three different responses and, in particular, explore the third:

1. Anthropic parameter fixing: Accept all previously noted assumptions except for (iii), and reconsider how to think of parameters like $\Lambda$ (along with other apparently finely tuned aspects of low-energy physics). Specifically, we should take the observed values as “anthropically selected” from an ensemble of possible values. (This strikes us as an act of desperation.)

2. Modify dynamics; keep EFT: Accept assumptions (i)–(iii), and reject (iv). Although the EFT concepts apply, we have overlooked something that will change the problematic scaling behavior (e.g., supersymmetry, change in the number of dimensions of spacetime, modified gravity, etc.). Thus, we should modify the particular details of gravitational dynamics so that the EFT framework applies.^{6}

3. Reject EFT: The argument is a reductio ad absurdum of the combination of assumptions (i) and (ii), namely, that we can treat all of classical GR as a low-energy EFT. The resulting failure of naturalness signals an internal inconsistency with the application of EFT methods to some specific domains of gravitational physics.
In pursuing the third line, we step back to take a look at the assumptions required to set up an EFT for gravity.^{7} We find that for spacetimes where one would expect $\Lambda$ to have a significant effect, we cannot set up a well-defined separation of energy scales. The failure of naturalness may internally signal the limits of applicability of the EFT framework. When we assume that theories are natural, we assume that EFT methods apply and that the effects of high-energy physics on low-energy Lagrangians are relegated to fixing the values of coupling constants.^{8}
Wallace (2019) argues that far from being a technical requirement relevant only to high-energy physics, naturalness underwrites how we understand intertheoretic relationships like emergence and reduction throughout physics. In Wallace’s account, naturalness plays an essential role in deriving emergent dynamics for macroscopic systems from more fundamental theories.^{9} We argue that the failure of naturalness in the CCP may signal the limits of applicability of the EFT framework. The EFT approach is overstated if taken to be a precondition for the possibility of physical theorizing, as one reading of Wallace (2019) suggests. Although we acknowledge the wonderful utility of decoupling, there is no necessity that nature cooperates with our fondness for EFTs. By rejecting the global applicability of the EFT framework, we endorse pursuing “unnatural” solutions.
The article proceeds as follows. Section 2 more carefully states the CCP within the EFT framework. We argue that there is no direct path to the CCP in terms of a conflict of differing measurements of $\Lambda$ from different observations. Within the Standard Model, there is no evidential support for any particular value of vacuum energy density. Thus, the problem arises in the context of treating GR as an EFT and using the renormalization group to understand the scaling behavior of $\Lambda$. Yet unlike the scaling of other terms in the effective action, a shift in the value of $\Lambda$ threatens to undermine assumptions about spacetime implicit in this way of treating the problem. Section 3 considers this question from a different perspective. We make explicit the spacetime structure that standard EFT techniques depend on, then examine the ways in which those spacetime assumptions can be relaxed for applications of GR as an EFT. The relaxed assumptions allow for EFT methods to be applied in special cases where the spacetime is nearly static or asymptotically flat. But the CCP arises when considering large-scale features of the universe, and EFT methods break down in this regime. Thus, it should not be surprising that EFT methods fail for understanding $\Lambda$. The failure of decoupling serves as an internal signal that the approach fails, and the limitations of EFTs support this conclusion from an external perspective. In section 4, we discuss some approaches to quantum cosmology that fall outside the EFT framework. The purpose of this section is to illustrate that the EFT framework, decoupling, and naturalness are not necessary preconditions for constructing models in physics. Finally, section 5 returns to the question of naturalness and its necessity for doing physics.
2. The cosmological constant problem
We characterized the CCP as arising from treating GR as a low-energy EFT. But is there a more direct way of posing the CCP? For example, if we have direct evidence that vacuum energy $\langle \rho \rangle$ exists, and it should contribute to the Einstein field equations as an effective $\Lambda$ term, doesn’t this immediately lead to a conflict—that different ways of inferring the same quantity lead to wildly different results? We deal with this question in section 2.1, concluding that there is no independent evidence for $\langle \rho \rangle$ from the point of view of QFT. We therefore have a problem with the EFT formalism when extended globally, as we indicate in section 2.2.
2.1 No conflicting measurements of $\Lambda$
Consider the effective Einstein–Hilbert action coupled to matter in the form of quantum fields (eq. [1]). The stress-energy tensor for matter fields will include a vacuum energy density playing an analogous role in the Einstein field equations to the cosmological constant. In semiclassical form, this looks like
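The semiclassical field equations referred to here take the familiar form (a reconstruction of the elided display, with $G_{ab}$ the Einstein tensor and $\kappa = 8\pi G$):

```latex
G_{ab} + \Lambda g_{ab} = \kappa \, \langle T_{ab} \rangle ,
```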
where the expectation value is taken about the global vacuum state. Because both $\Lambda$ and $\langle \rho \rangle$ contribute as constant multiples of the metric, we observe only the consequences of their combination,
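Because a vacuum energy density enters the stress-energy tensor as $\langle T_{ab} \rangle \supset -\langle \rho \rangle\, g_{ab}$, the observable combination is (reconstructing the elided expression; sign conventions vary):

```latex
\Lambda_{\text{obs}} = \Lambda + \kappa \langle \rho \rangle .
```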
If we have direct evidence for the presence and value of $\langle \rho \rangle$ in $\mathcal{L}_m$, then it should contribute to $\Lambda_{\text{obs}}$ along with the $\Lambda$ term from the Einstein–Hilbert action. This apparently allows for a direct observational comparison: measure the total energy density in a region, including $\langle \rho \rangle$, and compare it to the curvature revealed through cosmological observations. However, any such “prediction” of $\langle \rho \rangle$ has to resolve ambiguities associated with composite operators (polynomials of field operators) in interacting QFTs. Here, we will focus, in particular, on ambiguities regarding the stress-energy tensor.
In perturbative QFT, the field operators appearing in a Lagrangian have no direct physical significance: we can write the Lagrangian in terms of new fields. When dealing with renormalizable QFTs, the only possible redefinitions are linear transformations, whereas EFTs allow for integer polynomials of $\phi$ and a finite number of derivatives. Such field redefinitions do not change the S-matrix elements. Physicists have taken advantage of this freedom to remove divergences by expressing the Lagrangian in terms of renormalized fields.^{10} Further natural constraints are imposed to clarify the physical meaning of some operators; for example, in the case of a conserved current $J^{\mu}$ associated with an internal symmetry, there is no ambiguity in defining the operator (Collins 1985, §6.6). The stress-energy tensor $T_{ab}$ includes products of field operators. For any of the methods introduced to handle these products, we can ask whether they rule out a field redefinition that has the following impact on the stress-energy tensor: $T'_{ab} = c_0 T_{ab} + c_1 \eta_{ab}\mathbf{I}$ (where $\mathbf{I}$ is the identity operator). Redefinitions in the EFT approach are typically required to preserve S-matrix elements and n-point functions. It turns out that preservation of the S-matrix does not constrain the value of $c_1$ because the total energy cancels out in calculations of the S-matrix elements. Thus, it does not appear that QFT has the resources to predict an unambiguous value for vacuum energy density.
Nevertheless, articles on the cosmological constant abound with claims that QFT predicts a value of vacuum energy density. For the sake of argument, consider the current best estimates,^{11} $\langle \rho \rangle \simeq 2 \times 10^{8}\,{\rm GeV}^4$, differing by over 50 orders of magnitude from the value of $\Lambda_{\text{obs}}/\kappa$ fixed by cosmological observations, $\simeq 10^{-47}\,{\rm GeV}^4$. We need not appeal to cosmology: even solar system dynamics constrain $\Lambda_{\text{obs}}/\kappa$ to be $\approx 40$ orders of magnitude smaller than $\langle \rho \rangle$.
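The size of the mismatch follows directly from the two quoted values; a quick sanity check, using only the numbers cited in the text:

```python
import math

rho_vac = 2e8       # EFT-based estimate of vacuum energy density (GeV^4), as quoted
lambda_obs = 1e-47  # value fixed by cosmological observations (GeV^4), as quoted

# Discrepancy in orders of magnitude between the two values.
discrepancy = math.log10(rho_vac / lambda_obs)
print(round(discrepancy, 1))  # → 55.3
```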
The attempt to directly relate gravitational measurements of $\Lambda$ to the vacuum energy density requires two assumptions. The first is that $\langle \rho \rangle$ gravitates. One class of the modified dynamics approaches to solving the CCP rejects this. By introducing a mechanism or modification that decouples $\langle \rho \rangle$ from gravity, one can treat $\Lambda _{\text{obs}}=\Lambda$ as a free parameter, determined by observation. For now, we will assume that vacuum energy, if real, obeys the equivalence principle like all other forms of energy. If not, then we still do not arrive at conflicting measurements of the same quantity; in that case, $\langle \rho \rangle$ does not contribute to $\Lambda$ . The second assumption is that the vacuum expectation value of energy density is real, that is, not an artifact of the QFT formalism on Minkowski spacetime. Its value must be determined independently of considerations of gravity; otherwise, the input $\langle \rho \rangle$ is unknown. Do we have direct evidence for the reality (and magnitude) of $\langle \rho \rangle$ from the Standard Model? How seriously should we take predictions of its value, such as the one cited earlier?
The standard response is to claim that either the Lamb shift or the Casimir effect provides direct evidence of the presence of vacuum energy density. However, both effects, at best, provide evidence for the presence of local fluctuations in vacuum energy, not a global expectation value (Koberinski 2021a). Typically, the Casimir effect, described as due to impenetrable plates limiting vacuum fluctuation modes, is taken as the strongest evidence in favor of $\langle \rho \rangle$. The plates constrain the production of virtual photons in the vacuum—only photon modes whose half-wavelengths fit a whole number of times into the plate spacing can be created between the plates. This creates a pressure differential because “more” virtual photons can interact with the outside of the plates than the space in between, leading to a small attractive force. However, alternative formulations characterize it as a residual van der Waals force between the atoms in the plates; Jaffe (2005) has explicitly performed an alternative calculation in which the effect is due to loop corrections in the relativistic forces between the material plates. This calculation generalizes more readily to other plate geometries, and unlike a pure vacuum pressure, it goes to zero when the QED coupling $\alpha$ is taken to zero. The original explanation in terms of differential vacuum pressure may be a successful shorthand for the more realistic explanation, but it seems to be little more than that.
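For scale, the standard ideal-conductor expression for the Casimir pressure between parallel plates, $F/A = \pi^2 \hbar c / (240\, d^4)$, can be evaluated numerically (this textbook formula is neutral between the vacuum-pressure and van der Waals readings discussed above):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def casimir_pressure(d: float) -> float:
    """Attractive pressure (Pa) between ideal parallel plates at separation d (m)."""
    return math.pi**2 * HBAR * C / (240 * d**4)

# At 1 micrometre separation the pressure is about a millipascal;
# it grows steeply (as 1/d^4) at smaller separations.
print(f"{casimir_pressure(1e-6):.2e}")  # → 1.30e-03
```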
For the Lamb shift, it is even clearer that this is nothing more than radiative corrections to a first-order QED calculation. The Lamb shift is a small difference in the $2s$ and $2p$ orbital energy levels of the hydrogen atom, which are equal if one uses the Dirac equation. From QED, we see the effect as a one-loop correction to the interaction between the proton and electron in a hydrogen atom. Loop corrections to interactions are not the same as vacuum energy, even if they are sometimes fancifully described as virtual particles from the vacuum interacting with the external particles. At best, these should be thought of as quantum fluctuations about the vacuum state. In terms of Feynman diagrams, vacuum energy is represented as a sum of bubble diagrams—diagrams with no external legs. These diagrams factor out of any n-point function and therefore play no role in predictions based on perturbation theory.
To summarize the arguments of this section, we claim that $\langle \rho \rangle$ plays no role in the empirical success of the Standard Model and that, furthermore, the Standard Model provides no prediction of its value. We cannot generate a direct conflict between different ways of measuring $\Lambda$ . Instead, we must deal directly with the principles of EFT for cosmological spacetimes.
2.2 The cosmological constant in effective field theory
The fundamental quantities of a QFT are the correlation functions among a set of operators $\{ O_i \}$ acting on the vacuum state, calculated based on the action $S = \int {\rm d}^4x \, {\mathcal{L}}(\phi)$ for a specific field theory (schematically):
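Schematically, these correlation functions are given by the path integral (reconstructing the elided formula):

```latex
\langle O_1(x_1) \cdots O_n(x_n) \rangle
= \frac{\int \mathcal{D}\phi \; O_1(x_1)\cdots O_n(x_n)\, e^{iS[\phi]}}
       {\int \mathcal{D}\phi \; e^{iS[\phi]}} .
```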
The EFT approach deals directly with these quantities, explicitly indexing them to a particular energy scale. Because the action is now defined in terms of effective degrees of freedom at that energy scale, we think of it as an effective action for that domain.^{12} This effective action can be constructed “top down” from an existing high-energy theory, such as by systematically integrating out the high-energy degrees of freedom, given a cutoff scale $\Gamma$. This can be described more abstractly as the action of the renormalization (semi)group on the space of theories, that is, actions at specific energy scales $\{S(\Gamma)\}$. This group generates a trajectory relating actions at different scales, and in the best case, trajectories through the infinite-dimensional space of theories $\{S(\Gamma)\}$ flow to a finite-dimensional subspace.
EFTs constructed “top down” in this fashion, from a given high-energy theory, provably yield low-energy observables compatible with the results of the full theory. We can also develop an EFT “bottom up”—proposing a Lagrangian ${\mathcal{L}}_{\text{eff}}$ with appropriate symmetries and fields, and including all possible couplings consistent with those symmetries, even though it is not obtained from a known high-energy theory. A separation of scales is still needed in the bottom-up approach. Obviously, one cannot then prove directly that the EFT will approach the (unknown) high-energy theory. The absence of the high-energy theory means that in applying the EFT framework, we must make substantive assumptions about an unknown future theory. One of these assumptions is clearly locality, namely, that ${\mathcal{L}}(\phi_i)$ depends on the fields $\phi_i$ and their Taylor expansions at a point.
In the EFT framework, we can classify the behavior of the vacuum energy density under renormalization group flow. To see why decoupling fails for a vacuum energy density term, we must first explain the behavior of different terms in the Lagrangian. In a spacetime with four dimensions,^{13} couplings with positive mass dimension indicate relevant parameters that increase in magnitude in the EFT as the cutoff is taken to higher energies. Renormalizable theories contain these and marginal parameters in the Lagrangian, the latter characterized by dimensionless couplings, which therefore do not contain powers of the cutoff. Irrelevant terms have coupling constants with dimension of negative powers of mass. Decoupling applies to the marginal and irrelevant parameters; relevant terms appear to couple sensitively to the high-energy cutoff. A sensitive dependence on the cutoff signals that relevant terms are sensitive to the scales at which new physics comes in.^{14}
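The classification can be summarized in a toy helper (an illustration of the dimension-counting rule, not part of the original text): in $d$ spacetime dimensions, the coupling multiplying an operator of mass dimension $\Delta$ has mass dimension $d - \Delta$.

```python
def classify_coupling(operator_dim: int, spacetime_dim: int = 4) -> str:
    """Classify a Lagrangian term by the mass dimension of its coupling."""
    coupling_dim = spacetime_dim - operator_dim
    if coupling_dim > 0:
        return "relevant"    # positive mass dimension: e.g. the Lambda / vacuum term
    if coupling_dim == 0:
        return "marginal"    # dimensionless coupling: e.g. phi^4 in d = 4
    return "irrelevant"      # negative mass dimension: suppressed by the cutoff

print(classify_coupling(0))  # identity operator (vacuum energy) → relevant
print(classify_coupling(4))  # dimension-4 operator → marginal
print(classify_coupling(6))  # higher-dimension operator → irrelevant
```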
The vacuum energy density $\langle \rho \rangle$ and $\Lambda$ terms exhibit the most problematic scaling behavior: they are relevant parameters, and they scale with the fourth power of the cutoff. The Standard Model is well confirmed up to the energy scales probed so far at the Large Hadron Collider (LHC), so the cutoff for an effective version of the Standard Model must be at least $\gtrsim 1$ TeV. One can arrange a delicate cancellation between the scaling from vacuum energy density plus quantum corrections to GR and the bare $\Lambda$ term: $\Lambda_{\text{obs}} = \mathcal{O}(\Gamma^4) - \mathcal{O}(\Gamma^4) \approx 0$, but this seems ad hoc. Further, it is unstable against radiative corrections to the vacuum energy density obtained when the higher-order terms in a perturbative expansion are included.^{15} Because we do not observe $\langle \rho \rangle$ directly, this is not an empirical problem. It instead indicates a breakdown of decoupling within the EFT framework. The behavior of $\langle \rho \rangle$ under renormalization group flow suggests that vacuum energy density is sensitive to high-energy physics. If the local, relevant $\Lambda + \langle \rho \rangle$ term from equation (1) is extrapolated to provide a contribution to the observed cosmological constant, this would indicate a highly sensitive coupling between high-energy physics and the deep infrared (IR) in cosmology.
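The instability of such a cancellation can be illustrated with exact rational arithmetic (a toy model; the cutoff and target values are hypothetical, chosen only to exhibit the sensitivity):

```python
from fractions import Fraction

cutoff = Fraction(10) ** 3             # hypothetical cutoff, ~1 TeV in GeV
contribution = cutoff ** 4             # O(Gamma^4) vacuum-energy contribution (GeV^4)
lambda_target = Fraction(1, 10 ** 47)  # hypothetical tiny observed value (GeV^4)

# Tune the bare term so that the observed value comes out tiny.
lambda_bare = lambda_target - contribution
assert lambda_bare + contribution == lambda_target  # exact cancellation

# A shift of one part in 10^12 in the contribution (e.g. from a higher-order
# radiative correction) completely destroys the tuning:
shifted = lambda_bare + contribution * (1 + Fraction(1, 10 ** 12))
print(float(shifted))  # → 1.0 (GeV^4), some 47 orders of magnitude too large
```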
There is a further challenge regarding how to understand this scaling behavior: Is there a self-consistent choice of background metric (and other structures) we can use to describe the renormalization group flow? Suppose that we start with an action defined at a specific energy scale $E_h$, low enough so that quantum gravity effects can be neglected and the metric $g_{ab}^1$ is a solution of classical GR. Implicitly relying on this metric, we can integrate out the high-energy modes to obtain an effective action at a lower-energy scale $E_l$. Yet the appropriate metric cannot still be $g_{ab}^1$ at this lower scale because the scaling properties described previously lead to a nonzero $\Lambda$ contribution. Even for relatively small changes of scale, this term will dominate, such that the action at the lower scale is defined with respect to a different classical metric, $g_{ab}^2$. It is common to claim that generic curved spacetimes “look enough like Minkowski locally,” such that the tools developed in flat space can be used. But incorporating the scaling of vacuum energy leads from an initial spacetime $g_{ab}^1$ to one with strikingly different global properties—for example, from Minkowski spacetime to de Sitter spacetime. Minkowski spacetime is qualitatively different from de Sitter spacetime, no matter how small the value of $\Lambda$. Furthermore, the $\Lambda \rightarrow 0$ limit is not continuous, as illustrated by the contrast in conformal structure. This suggests that the renormalization group trajectory for $\Lambda$ should be defined over a space of metrics, not just over the values of parameters appearing in the Lagrangian.
In sum, the scaling behavior of $\Lambda$ within the EFT approach signals a dependence on high-energy physics, and we have argued that it also cannot be consistently described with respect to a single fixed background spacetime. This raises the broader question of what we need to assume regarding spacetime to apply EFT techniques, which we turn to next.
3. Spacetime for effective field theories^{16}
Effective field theories, as generalized from renormalizable QFTs, implicitly rely on certain background spacetime structures. Both top-down and bottom-up construction procedures partition the degrees of freedom for a system into those relevant to the EFT and those outside of its domain. Typically, the EFT describes low-energy, fluctuating modes against a backdrop of high-energy modes that remain in an adiabatic ground state. Such a description relies on separating the degrees of freedom based on their energy, which requires a well-defined notion of energy, as well as a sufficiently stable cutoff point to sort high- from low-energy degrees of freedom. This means that the spacetime on which the EFT is defined must have something approximating a timelike Killing vector field. This is a demanding requirement, not satisfied by, for example, the FLRW models used in relativistic cosmology. This does not threaten the insights gained from treating GR as an EFT, applied to problems that assume either a Minkowski background or some other background with sufficient structure (at least approximately) to identify the relevant degrees of freedom. Yet it does raise the question of how much insight we can gain from EFT methods regarding the cosmological constant.
Applications of EFT methods proceed, schematically, by identifying low-energy degrees of freedom and symmetries, then writing the most general effective action for these degrees of freedom compatible with these symmetries. The earlier form of the effective action (eq. [2]) follows by treating the low-energy degrees of freedom as gravitons (spin-2 fields), along with matter degrees of freedom, and requiring diffeomorphism invariance and local Lorentz invariance for terms in the expansion. There are several other ways of applying EFT techniques to gravitational physics, such as the “nonrelativistic GR” approach (Goldberger and Rothstein 2006) developed to study the inspiral phase of merging compact objects and the radiation they emit.^{17} This approach takes advantage of the separation of scales between the extended compact objects and gravitational perturbations, integrating out the degrees of freedom associated with the objects and treating them as point particles, and starts from a different effective action. EFT techniques have also been applied to the study of structure formation in cosmological models, based on an action that describes a coupled scalar field–metric system satisfying the FLRW symmetries.^{18}
Here, we will focus on an EFT constructed for gravity based on the effective action given in equation (2). EFT calculations based on this action have led to several seminal results, such as Donoghue’s expression for the leading-order quantum corrections to the Newtonian potential between nonrelativistic particles.^{19} The higher-order terms in the Lagrangian scale with the inverse powers of the Planck mass, so the quantum corrections are extremely small. Because there is a much larger separation of scales here than in other areas of physics, the EFT for GR is sometimes described as the best EFT. Yet the cosmological constant is not dynamically relevant in this calculation, which proceeds in Minkowski spacetime. Donoghue (2012), for example, explicitly treats $\Lambda$ as one of the EFT parameters to be fixed by observations, and he simply sets it to zero in calculating the quantum corrections while noting that it is unimportant in this domain. As we will see, this is only permissible when we have an external reason to think that $\Lambda$ is not physically relevant.
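The result alluded to here is, in the form reported by Donoghue and collaborators (the numerical coefficients were revised over several years, so treat them as indicative rather than definitive):

```latex
V(r) = -\frac{G m_1 m_2}{r}
\left[ 1 + 3\,\frac{G(m_1 + m_2)}{r c^2}
 + \frac{41}{10\pi}\,\frac{G\hbar}{r^2 c^3} + \cdots \right],
```

where the second term is a classical post-Newtonian correction and the third is the genuinely quantum, $\hbar$-dependent correction, suppressed by the square of the Planck length relative to $r^2$.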
Extending beyond Minkowski spacetime, it is still necessary to identify the degrees of freedom to be included in the action and to draw the contrast between high- and low-energy modes. As we noted earlier, this is possible in static spacetimes with a timelike Killing vector field. In static spacetimes, we have a well-defined separation of energy scales, and therefore a well-defined notion of energy conservation, and we can construct a conserved energy that is bounded from below. This naturally gives rise to a stable ground state: a well-defined vacuum state as the lowest-energy eigenstate of the Hamiltonian operator. In general, a frequency splitting for matter fields can be carried out as well. Given all of this, we can identify perturbations around this vacuum and create a Fock space of fluctuations, and we can distinguish between low- and high-energy states, in order to apply EFT methods.
Physicists have successfully applied EFT techniques to spacetimes that have approximately static regions and those that are symmetric “at infinity” (i.e., quasi-static and asymptotically flat spacetimes, respectively; see Burgess [Reference Burgess2004] for an overview). For the former, as long as a local, approximate notion of energy remains well defined for the timescales relevant to the problem, one can construct an approximately conserved Hamiltonian and an approximate division into high- and low-energy modes. But these relaxed conditions still depend on an approximately well-defined separation of energy scales. For backgrounds on which these approximations fail at the distance and timescales of interest, the EFT construction procedure cannot get off the ground.
In asymptotically flat spacetimes, one can exploit the Minkowskian structure at infinity to define conserved energies and ground states. Provided that one is interested in effects observable far away from the central region with complex gravitational dynamics, it is reasonable to expect that EFTs provide a good basis for calculation. This is the assumption behind EFT calculations of Hawking radiation measured far from the event horizon of a black hole.
One generalization most relevant to the domain of cosmology is that to slowly varying, time-dependent background spacetimes. In general, one cannot construct an EFT without energy conservation, because EFTs organize and separate states according to energy. However, if the time evolution is adiabatic—that is, if the metric and other time-dependent fields vary sufficiently slowly compared to the ultraviolet (UV) scales of interest—one can construct an approximately conserved Hamiltonian, an approximate ground state, and an approximate (time-dependent) low-/high-energy split (cf. Burgess Reference Burgess2017). Adiabatic evolution is then indexed to particular domains of a spacetime solution. Where adiabaticity fails, one can encounter crossing of energy scales, from the EFT regime $p \lt \Lambda (t)$ to the high-energy regime $p \gt \Lambda (t)$, and vice versa.
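The adiabaticity requirement can be stated as a rough rule of thumb (our heuristic paraphrase, not a formulation from the literature cited): if the background evolves at a characteristic rate $H(t)$, for instance the Hubble rate in cosmology, then the EFT construction requires

```latex
\left|\frac{\dot{g}_{\mu\nu}}{g_{\mu\nu}}\right| \sim H(t) \ll \Lambda(t),
```

so that modes near the cutoff $\Lambda(t)$ see an effectively static background over their characteristic timescales. When $H(t)$ grows comparable to $\Lambda(t)$, the low-/high-energy split is no longer stable and modes cross the cutoff.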
So far we have focused on the spacetime structure needed to identify the degrees of freedom of interest, to take the first step in constructing an EFT. But the full force of the EFT framework provides more than just this first step. It is not enough to cut off high-energy degrees of freedom; we must also ensure that the resulting theory has the appropriate inter-scale insensitivity, using renormalization group methods. Without this assurance, we have effectively fine-tuned a solution that is neither self-consistent nor robust to perturbations at energy scales that are supposed to have been screened off. We can think of this as a two-stage process for setting up EFTs. First, can we write down an EFT at a given scale, setting its couplings to those determined empirically, and use this to calculate leading-order quantum effects on gravity? We argue that the answer to this question is yes: the successful applications of EFTs in gravity described earlier take this form. Second, can we then extend this EFT to different (higher or lower) energy scales, using the scaling properties typical of flat-space EFTs? The answer to this question is no: the failure of naturalness for $\Lambda$ ruins the possibility of a self-consistent background metric that grounds the notion of energy and that of scale separation for the EFTs at different energies.
The scaling behavior discussed at the end of the previous section raises a different challenge. The EFT describes low-energy degrees of freedom propagating with respect to a fixed background, such as a vacuum solution or thermal state. Can we assume that the background used in the EFT is consistent with the solutions to the field equations of the full theory? This can be proven to hold (in cases where the full theory is known) if the background fields evolve adiabatically (see Burgess Reference Burgess2017). But the background fields do not remain static if quantum corrections to the action have the form of an effective cosmological constant term. As discussed in section 2.2, a spacetime background upon which an EFT is constructed will change drastically under an otherwise standard scaling transformation of the EFT, undermining the self-consistency of the full EFT treatment. The backreaction of such contributions on the metric may be negligible in relatively small spacetime regions, but it has a cumulative effect at large distances and over long times. Hence, it is strikingly implausible to assume that the EFT background matches a solution to the full field equations at large scales. Yet these are precisely the scales at which the dynamical effects of a cosmological constant term would become apparent.
These backreaction effects can take a different form in the deep IR as well. Assume that we can model a “patch” of a given spacetime using either Minkowski spacetime or some other fixed background spacetime. In that case, we can use Riemannian normal coordinates in a local patch, and we can make it clear in what sense the spacetime “looks locally Minkowskian.” Thus, within that patch we have a welldefined background on which to construct an EFT. We can similarly construct local patches over other regions of the spacetime. However, to be able to stitch these together, we would need to impose a strong constraint on the metric (or on the curvature) that is not likely to hold in general. Donoghue (Reference Donoghue2009) takes this situation as a novel illustration of “how EFTs fail,” in that they cannot adequately describe the “buildup” of effects as we patch such local descriptions together.
Both points challenge the assumption that we have adequate control over the background solution to establish the self-consistency of the EFT. The consistency challenge arises if we try to maintain both of the following: (i) we carry out EFT calculations describing spin-2 and matter fields propagating over Minkowski spacetime (or in a static, curved spacetime), and (ii) the quantum fields contribute to a nonzero cosmological constant as a result of radiative corrections. If (ii) is accepted, then the assumption that the spacetime background is Minkowski (or static) is, at best, an approximation for limited regions. The appropriate background at larger scales—if one can even be defined—should instead be de Sitter spacetime, because of the large $\Lambda$ contribution. Viewed internally, we arrive at the reductio of the CCP. Viewed externally, we have shown that one should not expect EFT methods to get off the ground in spacetime settings without approximate or asymptotic temporal symmetries relative to the physics of interest. Spacetimes where $\Lambda$ is dynamically relevant, including FLRW cosmological models, fall outside the domain of current EFT treatments. New conceptual resources are needed where it is relevant. We outline some potentially promising avenues in the next section.
4. Unnatural solutions
As Kuhn (Reference Kuhn1962) recognized, criticisms of an appealing approach rarely lead scientists to abandon it unless there is an available alternative. We have argued earlier that the EFT program is ill-suited for dealing with global features of cosmology and, in particular, that the CCP is a signpost that something has gone wrong. Although there is not yet a clear alternative, there are several lines of work that aim to reformulate the foundational principles of flat-space QFT. These avenues of research show that the separation of energy scales, far from being a precondition for the possibility of science, is not an essential feature of current speculative physics. To be clear, we do not expect the EFT approach to be entirely replaced, given successes such as the EFT methods applied to GR mentioned earlier. Rather, the CCP forces us to acknowledge the limitations of the EFT approach, alongside the need for new ideas regarding the global properties of quantum fields coupled to gravity. In this section, we briefly outline three research programs that reject some of the basic EFT concepts: quantum field theory on curved spacetimes, the breakdown of locality from string theory, and the UV-IR correspondence. Some of these approaches reject the EFT framework in the context of matter fields on classically curved spacetime backgrounds, whereas others reject it directly for gravitational degrees of freedom. In either case, these research programs highlight the ways in which the decoupling of energy scales fails in the cosmological solutions relevant to the problem at hand.
QFT on curved spacetimes
This approach replaces foundational concepts in QFT with generalized versions appropriate for generic curved spacetime backgrounds. It takes to heart lessons from GR in aiming to construct quantum field theories in a way that depends only on local spacetime properties. The spacetime background is still treated classically, with the generalizations focused on the equations governing matter fields. Conceptual reengineering focuses on the spectrum condition, Poincaré covariance, and the existence of a unique vacuum state because all of these depend on or follow from symmetries of Minkowski spacetime. The ambitions of this approach do not extend to including backreaction of the quantum fields on spacetime; this is not a quantum theory of gravity, and it is contentious how much it contributes to formulating one.Footnote ^{20} Essentially, this approach deals only with understanding matter degrees of freedom and ignores the treatment of gravity as an EFT. Nonetheless, it highlights one way to make fundamental changes to our understanding of QFT, as well as the resulting effects on the EFT framework and the CCP.
For the sake of definiteness, we focus here on the axiomatic approach pursued by Hollands and Wald (for reviews, see Reference Hollands and Wald2010, Reference Hollands and Wald2015), based on constructions of simple scalar $\phi ^4$ models on globally hyperbolic, but otherwise generically curved, spacetimes. They work in a position-space representation and use operator product expansions as the basic local building blocks for a QFT, rather than the Fock-space momentum representation on which Minkowski QFTs are built. Although a Fock-space representation for free fields is not necessarily required for a generic EFT, one often does assume that many of the symmetries of Minkowski spacetime hold locally. Moreover, if transition amplitudes are meant to be transitions between privileged, well-defined particle states, then the Fock-space construction is required. Hollands and Wald abandon this framework: Poincaré covariance is generalized to a local general covariance of the fields, and the positive-frequency condition for fields is characterized locally in terms of the singularity structure of the $n$-point functions of fields. This is the microlocal spectrum condition: it encodes the same information as the positive-frequency condition, but it does so in a local way that does not depend on the global structure of spacetime.
The key conceptual change is the lack of a vacuum state as the basis for constructing a QFT. On curved spacetime backgrounds, there is no privileged global vacuum state, and therefore nothing that has the correct symmetries and invariance properties to play the role of a cosmological constant term once gravitational degrees of freedom are included. This lack of a preferred vacuum marks a sharp contrast with conventional QFT, which often aims to calculate correlation functions for quantum fields in their vacuum states. Furthermore, renormalization techniques in flat spacetime implicitly utilize the preferred vacuum state in order to handle products of field operators. These renormalization procedures are often presented as subtracting divergences mode by mode. By contrast, Hollands and Wald’s local and covariant formulation of QFT in curved spacetimes has to do without a globally defined preferred state or a division into positive-/negative-frequency modes. The treatment of renormalization they develop is by necessity holistic (see §3.1 in Hollands and Wald Reference Hollands and Wald2015): products of field operators have to be renormalized with respect to a locally defined quantity (the Hadamard distribution), and this cannot be interpreted as a mode-by-mode subtraction. As a result, it is difficult to see how to implement the division between low- and high-energy modes that is crucial to EFT methods.
If this approach is used as a starting point for quantizing gravity, it is unclear how one would construct EFTs. The approximation of small perturbations about a static or asymptotically flat background fails for generic globally hyperbolic spacetimes, as we have argued in the previous section. This approach to QFTs on curved spacetimes illustrates one way of rethinking the foundations of QFT and the basic elements of renormalization when merging gravitational and matter degrees of freedom. By considering how curved spacetime backgrounds change the construction of QFTs, one is less tempted to inappropriately generalize the successes of the EFT framework to generic globally hyperbolic spacetimes.
Breakdown of naturalness from quantum gravity
Although there is not yet a satisfactory theory of quantum gravity, we can look to the most developed speculative theories to see the ways in which the EFT approach might break down. By focusing on candidate theories, such as string theory, we can gain insight into the ways that low-energy physics might be affected by a future complete theory of quantum gravity. There are multiple ways that EFT methods could break down in a theory of quantum gravity. First, the successor theory might introduce new physics at some intermediate scale (below the Planck scale) that naive EFT approaches would miss if they are taken to be applicable right up to the Planck scale. Second, the emergence of spacetime would place limitations on the applicability of EFTs. The assumption of a static or asymptotically flat spacetime background must break down when the very concepts of space and time break down. In regimes where spacetime concepts fail to apply, the ideas of background spacetime, locality, and separation of energy scales also fail to apply.
These two generic “breakdowns” are of less interest because neither would require a fundamental reconfiguration of the EFT approach. In the former case, the general EFT methodology would still apply, and one would simply have to lower the upper limit of applicability of EFTs to the new mass scale. In the latter case, as long as all interactions in the successor theory are local—at scales where the concept of locality remains relevant—EFTs operating at scales far below the breakdown of spacetime would be insensitive to it.
A more interesting problem related to the latter involves theories of quantum gravity whose fundamental degrees of freedom are nonspatiotemporal. At this more fundamental level, one might worry that a global separation of energy scales makes no sense for the fundamental degrees of freedom. The conceptual understanding of integrating out high-energy degrees of freedom would then break down in light of the new theory, because the concept of “high energy” may not be well defined. There is little reason to suspect that fundamentally nonspatiotemporal degrees of freedom will fit the EFT framework or that their effects will be limited to renormalizations of coupling constants or additional local interactions.
Other interesting problems arise when we consider specific theories of quantum gravity. String theory raises two potential problems for the EFT approach. First, string-theoretic T-duality links the UV and IR energy scales. We save this for the next section because a UV-IR correspondence can arise in other contexts (e.g., double field theory). The second problem is the breakdown of locality at the string scale. Because strings are extended objects, at length scales comparable to that of the strings, the idealization of treating string interactions as local point interactions breaks down. This may not be a problem in static or asymptotically flat spacetimes, because the nonlocalities at the string scale are unlikely to have impacts at larger distances (ignoring the deep IR scales and T-duality). If string-scale interactions do not grow or cascade over time, then EFTs at much lower energy scales can deal effectively with nonlocal string interactions the same way that any QFT does: by approximating the string length to be zero and treating the interactions as local. At energy scales much lower than the string scale, this approximation will hold, and the EFT approach should proceed without problems.
How do these considerations bear on extending EFT techniques beyond static or asymptotically flat spacetimes? In cosmological spacetimes, we need to ensure that there are no cascading effects across energy scales that would lead to a stretching of nonlinear effects originating at the string scale. One concrete example where this has been conjectured is in inflation. In a rapidly expanding universe, Planck- or string-scale fluctuations would stretch rapidly; if inflation went on for long enough, these fluctuations could cross the Hubble radius and classicalize. Bedroya et al. (Reference Bedroya, Brandenberger, Loverde and Vafa2020) proposed a trans-Planckian censorship conjecture, ruling out by fiat the possibility of Planck-scale modes crossing the Hubble radius. The censorship conjecture limits the length of inflation to be short enough that Planck-scale fluctuations cannot grow to a size comparable with the Hubble radius. Further, this only rules out Planck-scale physics; if nonlocalities arise at $l_{\rm{string}} \gg l_{\rm{Planck}}$, then the constraint on the length of the inflationary epoch is even more restrictive. The censorship conjecture is a patchy fix, and Bedroya et al. acknowledge that something beyond an EFT approach may be needed to properly address the problem. New ideas that can reproduce the power spectrum of anisotropies in the cosmic microwave background radiation may be needed for the early universe, because inflation stretches EFT methods beyond their proper domain of applicability. Some alternatives to inflation inspired by string theory are the emergent universe from a string gas and an ekpyrotic bounce universe (Brandenberger Reference Brandenberger2014). Loop quantum cosmology also posits a bouncing universe. All of these are capable of producing a nearly scale-invariant power spectrum for anisotropies, and so they remain viable alternatives to inflation.
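Schematically, the censorship conjecture can be written as a bound on the total expansion during inflation (our reconstruction of the standard statement, in units with $\hbar = c = 1$): if inflation runs from $t_i$ to $t_f$, then

```latex
\frac{a(t_f)}{a(t_i)}\, l_{\rm Pl} < \frac{1}{H(t_f)}, \qquad \text{equivalently} \qquad N = \ln\frac{a(t_f)}{a(t_i)} < \ln\frac{M_{\rm Pl}}{H(t_f)},
```

so that no mode that started at the Planck length is ever stretched beyond the Hubble radius. Replacing $l_{\rm Pl}$ with a longer nonlocality scale tightens the bound on the number of e-folds $N$ accordingly.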
Additionally, two of these approaches—loop quantum cosmology and string gas cosmology—also abandon the standard EFT framework. Instead of a bottom-up EFT construction, they start with the high-energy theory and work down to the appropriate limiting domains applicable to early-universe physics. Although inflation is by far the most well-developed approach to understanding structure formation in the early universe, these competitors show how one could move forward outside of the context of EFTs.
UV-IR correspondence
One final approach to speculative physics that rejects decoupling and the EFT framework is the idea of a symmetry or correspondence between high-energy (UV) and low-energy (IR) degrees of freedom. It is relatively obvious how a UV-IR correspondence would fall outside of the EFT approach: at very high and very low energies, physical effects are sensitively coupled, so EFTs cannot be used to integrate out the effects of high-energy physics, at least without knowing the exact form of the high-energy theory. A UV-IR correspondence could apply directly to gravitational degrees of freedom or to matter degrees of freedom. Typically, a UV-IR correspondence is discussed in relation to the T-duality symmetry in string theory, where there is a transformation between degrees of freedom at the distance scale $R$ and its inverse $1/R$, where distance is in string units. In the case of the cosmological constant, one might explain its presence and particular observed value as a remnant of UV effects, because cosmological distance scales are in the deep IR. From the point of view of string theory, T-duality has implications for both matter and gravitational degrees of freedom, and it therefore has the potential to link the two with the cosmological constant.
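The duality can be illustrated with the textbook mass spectrum of a closed bosonic string compactified on a circle of radius $R$ (a standard formula, with $n$ the momentum number, $w$ the winding number, and $N$, $\tilde{N}$ the oscillator levels):

```latex
M^2 = \frac{n^2}{R^2} + \frac{w^2 R^2}{\alpha'^2} + \frac{2}{\alpha'}\left(N + \tilde{N} - 2\right),
```

which is invariant under $R \rightarrow \alpha'/R$ together with $n \leftrightarrow w$. Physics at a small radius is thus identical to physics at a large radius, with momentum (UV at small $R$) and winding (IR) modes exchanged.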
In one sense, the EFT framework is capable of accommodating T-duality because it is strictly another symmetry from the high-energy theory. One must include in one’s EFT description a spacetime symmetry mapping $x \rightarrow 1/x$ in the appropriate units. This is the project pursued by double field theory. Essentially, one constructs an EFT with dual copies of the fields of interest, such that the duality is a symmetry of this expanded field theory. Although this approach still falls within the EFT framework, naturalness is clearly violated, and it is unclear how the cosmological constant can arise in the context of double field theory (Aldazabal et al. Reference Aldazabal, Marques and Nunez2013). Under the toroidal compactification approach in double field theory, setting the flux through the torus equal to zero leads to vanishing scalar potentials and therefore no cosmological constant term. However, more complicated compactifications may allow for a scalar potential to play the role of a cosmological constant term. Even though this approach leads to a type of EFT, it is of a very different character from the standard EFTs defined on a single configuration space. In particular, the decoupling assumption is violated because high energies in one set of degrees of freedom correspond to low energies in the other. Other approaches to handling a UV-IR correspondence may make further, more drastic modifications to the standard EFT approach.
5. Conclusions
The cosmological constant problem should be regarded as a reductio ad absurdum. As such, it shares the frustrating feature of any reductio argument: it shows that the set of starting assumptions leads to an absurd result without indicating where the error lies. Of the assumptions generating the problem, we have argued that one should reject the application of EFT methods to the far IR of cosmological spacetimes. Standard EFT methods depend on having specific types of background spacetime structure. Although some generalization beyond Minkowski and Euclidean spacetimes is possible, EFTs cannot yet be constructed on generic spacetimes. As shown in section 3, EFTs require global properties to make a meaningful split between relevant and irrelevant scales, near-static backgrounds to ensure that this boundary does not change, and minimal backreaction effects on the spacetime background. None of these conditions holds for almost-FLRW spacetimes over cosmic scales of distance and time. The successes of treating GR as an EFT are limited to special cases where the background spacetime has sufficient structure, where backreaction effects are negligible, and where the cosmological constant can be ignored.
In general, induction from the success of limited examples to a greater scope of applicability is a good strategy for scientific inquiry. The great success of EFT methods might lead one to suspect that the issues with generalizing to curved spacetimes are merely transient and that the EFT methodology should not be abandoned. This is the approach that many working on solutions to the CCP have taken, implicitly or explicitly, and it corresponds to the modify-dynamics approach outlined in the introduction. This is not an unreasonable approach, and it takes the reductio seriously by making local modifications to particular dynamical theories, rather than, as we propose, rejecting the applicability of the EFT framework. Some solution strategies involve adding new symmetries (e.g., supersymmetry, double field theory), whereas others involve adding new fields relevant at scales just above the Standard Model or modifying the coupling between gravity and vacuum energy. In most of these, one rejects decoupling and therefore concedes that some low-energy phenomena are tightly linked to high-energy physics. Even if one is confident in the EFT methodology, local modifications are still necessary to solve the problem.
However, the arguments in section 3 show the intrinsic limitations of the EFT framework and should temper any expectations that generalizations to cosmology will be successful. It could turn out that EFT methods will generalize, but such a generalization will require significant conceptual modifications to basic assumptions, such as a global separation of energy scales. The key assumption that fails for the standard EFT approach here is the assumption of naturalness, in the form of autonomy of scales. We have seen that the cosmological constant is highly sensitive to the choice of regularization procedure and the value of the regulator, indicating that it is sensitive to the details of high-energy physics. The apparent failure of naturalness in the CCP is well known, and it is often taken as part of a more general issue in physics. Besides the CCP, naturalness issues arise in the hierarchy problem in particle physics. Some argue that any failure of naturalness will have wide-ranging consequences for the metaphysics and epistemology of science generally (see Wallace [Reference Wallace2019] and references therein). We have argued that such sweeping conclusions are unwarranted. Section 4 highlights some approaches that accept a local failure of naturalness within cosmology, rather than attempting to restore it or make radical changes to our understanding of science. Although a post-naturalness high-energy physics creates significant new theoretical challenges, we think this is an avenue worthy of serious pursuit.Footnote ^{21}
We end by noting that the separation of scales fails elsewhere in physics, and therefore the naturalness problems arising in gravitational EFTs are not sui generis. Nonlinear dynamical systems can exhibit inverse energy cascades, with energy transferred from short scales to longer scales, as in two-dimensional turbulent flows. Such features would spoil the separation of scales essential to applying EFT methods. Mesoscale modeling also requires significant input from physics at distinct scales. Batterman (Reference Batterman and Batterman2013) has argued extensively in favor of mixing micro- and macroscale modeling techniques in condensed matter physics and materials science, for example. Without accounting for physical effects at distinct scales, one misses important features of bulk materials. These phenomena are certainly more challenging to model and predict than those in which scales evolve autonomously. We can take some comfort in recognizing that this kind of challenge is not unique to quantum gravity and that physicists have developed techniques to handle the failure of scale separation effectively in other contexts.
Acknowledgments
We are grateful to Mike Schneider, Robert Brandenberger, Marie Gueguen, Niels Linnemann, Dimitrios Athanasiou, and four anonymous reviewers for helpful feedback on earlier drafts of this work. This work was supported by John Templeton Foundation Grant 61048. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation.