
Sobolev spaces via chains in metric measure spaces

Published online by Cambridge University Press:  04 August 2025

Emanuele Caputo*
Affiliation:
Mathematics Institute, University of Warwick, Zeeman Building, Coventry CV4 7AL, United Kingdom (emanuele.caputo@warwick.ac.uk)
Nicola Cavallucci
Affiliation:
Institute of Mathematics, EPFL, Station 8, 1015 Lausanne, Switzerland (n.cavallucci23@gmail.com)
*Corresponding author.

Abstract

We define the chain Sobolev space on a possibly non-complete metric measure space in terms of chain upper gradients. In this context, ɛ-chains are finite collections of points with distance at most ɛ between consecutive points. They play the role of discrete curves. Chain upper gradients are defined accordingly and the chain Sobolev space is defined by letting the size parameter ɛ go to zero. In the complete setting, we prove that the chain Sobolev space coincides with the classical notions of Sobolev spaces defined in terms of relaxation of upper gradients or of the local Lipschitz constant of Lipschitz functions. The proof of this fact is inspired by a recent technique developed by Eriksson-Bique in Eriksson-Bique (2023 Calc. Var. Partial Differential Equations 62 23). In the possibly non-complete setting, we prove that the chain Sobolev space is equal to the one defined via relaxation of the local Lipschitz constant of Lipschitz functions, while in general it differs from the one defined via upper gradients along curves. We apply the theory developed in the paper to prove equivalent formulations of the Poincaré inequality in the non-complete setting, in terms of pointwise estimates involving ɛ-upper gradients, lower bounds on the modulus of chains connecting points, and the size of separating sets measured with the Minkowski content. Along the way, we discuss the notion of weak ɛ-upper gradients and asymmetric notions of integral along chains.

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of The Royal Society of Edinburgh.

1. Introduction

A fundamental research direction in analysis on metric spaces is the development of calculus with Sobolev functions and Lipschitz functions defined on metric measure spaces $({\rm X},{\sf d},\mathfrak m)$. After a first approach due to Cheeger [Reference Cheeger14], Shanmugalingam [Reference Shanmugalingam31] defined the Sobolev seminorm of a Borel function $u\colon {\rm X} \to \mathbb{R}$ as the infimum of the $L^p({\rm X})$-norms of all upper gradients of u, where a function $g\colon {\rm X} \to [0,+\infty]$ is an upper gradient of u if the following weak version of the fundamental theorem of calculus holds for every rectifiable curve $\gamma\colon [0,1] \to {\rm X}$:

\begin{equation*} |u(\gamma_1)-u(\gamma_0)| \le \int_0^1 g(\gamma_t)|\dot{\gamma}_t|\,{\mathrm d} t. \end{equation*}

We refer the reader to the classical textbook [Reference Heinonen, Koskela, Shanmugalingam and Tyson22].

Another classical approach in the subject, concerning the case p > 1, is based on a relaxation procedure of appropriate functionals with respect to the $L^p({\rm X})$-topology, playing the role of the Dirichlet energy. In $\S$ 2.1, we will recall these relaxation procedures in detail. On the one hand, Cheeger in [Reference Cheeger14] considered the relaxation of the $L^p({\rm X})$-norm of upper gradients. Here we denote the corresponding Banach space by $H^{1,p}_{{\rm curve}}({\rm X})$. On the other hand, Ambrosio, Gigli and Savaré in [Reference Ambrosio, Gigli and Savaré4, Reference Ambrosio, Gigli and Savaré5] studied the relaxation of the $L^p({\rm X})$-norm of the local Lipschitz constant ${\rm lip}\,u$ of Lipschitz functions u. The corresponding Banach space is denoted by $H^{1,p}_{{\rm AGS}}({\rm X})$. For details on these spaces we refer to $\S$ 3.

Other approaches are available, like the ones defined via integration along test plans [Reference Ambrosio, Gigli and Savaré4, Reference Ambrosio, Gigli and Savaré5], but they will not be used in this work. We refer the reader to the recent survey [Reference Ambrosio, Ikonen, Lučić and Pasqualetto6].

The main results of [Reference Ambrosio, Gigli and Savaré4, Reference Ambrosio and Di Marino2, Reference Eriksson-Bique18, Reference Lučić and Pasqualetto30] show that the spaces $H^{1,p}_{{\rm curve}}({\rm X})$ and $H^{1,p}_{{\rm AGS}}({\rm X})$ coincide for $p\ge 1$, if the metric space $({\rm X},{\sf d})$ is complete. However, for non-complete metric spaces they can be different, see example 3.4.

This work derives from the following question: is it possible to show that $H^{1,p}_{{\rm AGS}}({\rm X})$ is equal to a space obtained via relaxation in terms of a suitable notion of upper gradients when the metric space $({\rm X},{\sf d})$ is not assumed to be complete?

The answer is affirmative and leads to an alternative definition of Sobolev or BV spaces, expressed in terms of chains instead of curves. We recall that an ɛ-chain, for ɛ > 0, is a finite collection of points ${\sf c} = \{q_i\}_{i=0}^N$ such that ${\sf d}(q_i,q_{i+1})\le \varepsilon$ for every $i=0,\ldots,N-1$.

The integration along rectifiable paths in Shanmugalingam’s definition is replaced by integration along ɛ-chains. With this analogy in mind, a function g is an ɛ-upper gradient of u provided that

\begin{equation*} |u(q_N)-u(q_0)| \le \sum_{i=0}^{N-1}\frac{g(q_i) + g(q_{i+1})}{2}{\sf d}(q_i,q_{i+1}) =:\int_{{\sf c}} g, \end{equation*}

for every ɛ-chain ${\sf c} = \{q_i\}_{i=0}^N$. This can be seen as a discrete analogue of the integral along a rectifiable path, see proposition 4.4 for a precise statement. The set of all ɛ-upper gradients of u is denoted by ${\rm UG}^{\varepsilon}(u)$. The corresponding functional is

\begin{equation*} {\rm F}_{\mathscr{C}^{}} \colon L^{p}({\rm X}) \to [0,+\infty], \quad u \mapsto \lim_{\varepsilon \to 0} \inf\left\{\| g \|_{L^p({\rm X})}\,:\, g\in {\rm UG}^{\varepsilon}(u)\right\}. \end{equation*}

The Banach space obtained by relaxation of ${\rm F}_{\mathscr{C}^{}}$ is denoted by $H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X})$.

The first result shows the equivalence of the spaces introduced so far, if the metric space is complete.

Theorem 1.1. Let $({\rm X},{\sf d},\mathfrak m)$ be a metric measure space such that $({\rm X},{\sf d})$ is complete. Then

\begin{equation*} H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X}) = H^{1,p}_{{\rm AGS}}({\rm X}) = H^{1,p}_{{\rm curve}}({\rm X}), \end{equation*}

and

\begin{equation*} \|u \|_{H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X})} = \|u \|_{H^{1,p}_{{\rm AGS}}({\rm X})} = \|u \|_{H^{1,p}_{{\rm curve}}({\rm X})}, \end{equation*}

for every $u\in L^p({\rm X})$.

The second result, which answers the aforementioned question, establishes the equality between $H^{1,p}_{{\rm AGS}}({\rm X})$ and $H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X})$ also for non-complete metric spaces. It is obtained from theorem 1.1 and the fact that $H^{1,p}_{{\rm AGS}}({\rm X})=H^{1,p}_{{\rm AGS}}(\bar{{\rm X}})$ and $H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X})=H^{1,p}_{{\rm \mathscr{C}^{}}}(\bar{{\rm X}})$, where $\bar{{\rm X}}$ is the metric completion of ${\rm X}$, see proposition 3.3 and theorem 6.9.

Theorem 1.2. Let $({\rm X},{\sf d},\mathfrak m)$ be a metric measure space, not necessarily complete. Then

\begin{equation*} H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X}) = H^{1,p}_{{\rm AGS}}({\rm X}), \end{equation*}

and

\begin{equation*} \|u \|_{H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X})} = \|u \|_{H^{1,p}_{{\rm AGS}}({\rm X})}, \end{equation*}

for every $u\in L^p({\rm X})$.

Other possible notions of integral along chains are considered in the paper, leading to the definition of the same space.

1.1. The proof of the main results

The proof of theorem 1.1 is inspired by the approximation method developed by Eriksson-Bique in [Reference Eriksson-Bique18]. The relation between $H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X})$ and $H^{1,p}_{{\rm curve}}({\rm X})$ is not a priori clear. Therefore we introduce an auxiliary space, which we denote by $H^{1,p}_{{\rm \mathscr{C}^{},\, {\rm Lip}}}({\rm X})$ and define as the domain of finiteness of the lower semicontinuous envelope of the following energy

\begin{equation*} {\rm F}_{\mathscr{C}^{},\, {\rm Lip}} \colon L^{p}({\rm X}) \to [0,+\infty], \quad u \mapsto \begin{cases} \underset{\varepsilon\to 0}{\lim}\,\inf\left\{\| g \|_{L^p({\rm X})}\,:\, g\in {\rm UG}^{\varepsilon}(u) \cap {\rm Lip}({\rm X}) \right\} &\text{if } u\in {\rm Lip}({\rm X}),\\ +\infty &\text{otherwise}. \end{cases} \end{equation*}

The space is normed with the sum of the Lp-norm and the relaxation of the energy functional above. The functions for which ${\rm F}_{\mathscr{C}^{},\,{\rm Lip}}$ is finite play the role of regular functions, being Lipschitz with Lipschitz upper gradients. They form a regular class of functions for which one could hope to get density in energy in $H^{1,p}_{{\rm curve}}({\rm X})$ in full generality.

On one side, one easily gets

(1)\begin{equation} H^{1,p}_{{\rm \mathscr{C}^{},{\rm Lip}}}({\rm X})\subseteq H^{1,p}_{{\rm curve}}({\rm X}) \qquad\text{and}\qquad H^{1,p}_{{\rm \mathscr{C}^{},{\rm Lip}}}({\rm X})\subseteq H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X}), \end{equation}

and that the inclusions are 1-Lipschitz.

The proof of theorem 1.1 is achieved in two steps, proving respectively that the reverse inclusions in (1) hold and are 1-Lipschitz.

To do both, we follow the approximation scheme of [Reference Eriksson-Bique18] mentioned above, which we briefly recall, in a simplified form and for the reader’s convenience, in the case of the proof of the inclusion $H^{1,p}_{{\rm curve}}({\rm X}) \subseteq H^{1,p}_{{\rm \mathscr{C}^{},{\rm Lip}}}({\rm X})$. For every given $u\in L^p({\rm X})$, proceed using the following steps.

  1. (Step 1) Reduce the proof to the case where u is bounded, with bounded support and nonnegative.

  2. (Step 2) For every upper gradient along curves g, define Lipschitz functions gj that converge to g pointwise and in $L^p({\rm X})$.

  3. (Step 3) Define the functions

    \begin{equation*}u_j(x) := \inf \left\{u(q_0) + \int_{\sf c} g_j \,:\, {\sf c} = \{q_i\}_{i=0}^N \ \text{is a }\tfrac{1}{j}\text{-chain such that } q_N = x\right\}.\end{equation*}

    Prove that uj is Lipschitz, has bounded support and has gj as a $\frac{1}{j}$-upper gradient (an elementary illustration of this construction is given right after the list).

  4. (Step 4) Conclude the proof by showing that uj converges to u in $L^p({\rm X})$ via a contradiction argument. The contradiction is obtained by violating the fact that g is an upper gradient of u.
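
To get a feeling for the definition of uj in Step 3, consider the idealized situation, used here only as a sanity check, in which gj is itself a $\frac{1}{j}$-upper gradient of u and the infimum is taken exactly as written above. Then for every $\frac{1}{j}$-chain ${\sf c} = \{q_i\}_{i=0}^N$ with $q_N = x$ one has

\begin{equation*} u(q_0) + \int_{\sf c} g_j \ge u(q_0) + |u(x)-u(q_0)| \ge u(x), \end{equation*}

while the trivial chain $\{x\}$ gives $u_j(x) \le u(x)$; hence $u_j = u$ in this idealized case. When gj is only an approximation of an upper gradient of u, the same comparison suggests why uj should converge to u.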

So, in order to prove that $H^{1,p}_{{\rm curve}}({\rm X}) \subseteq H^{1,p}_{{\rm \mathscr{C}^{},{\rm Lip}}}({\rm X})$, one could follow this scheme with minor modifications. However, there are technical issues in adapting the proof to show that $H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X}) \subseteq H^{1,p}_{{\rm \mathscr{C}^{},{\rm Lip}}}({\rm X})$, which we present in a moment. This is why we give a different proof of $H^{1,p}_{{\rm curve}}({\rm X}) \subseteq H^{1,p}_{{\rm \mathscr{C}^{},{\rm Lip}}}({\rm X})$ as well, one that can be easily adapted to show $H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X}) \subseteq H^{1,p}_{{\rm \mathscr{C}^{},{\rm Lip}}}({\rm X})$.

The main problem is related to the reduction in Step 1. As stated, this is a consequence of the well-known locality property of the minimal p-weak upper gradient along curves, which can be derived from a Leibniz rule. The same locality property for p-weak ɛ-upper gradients does not hold, and it is also not clear whether it is true in a limit sense as ɛ goes to zero (see remark 6.5). However, a weak Leibniz rule for ɛ-chain upper gradients holds (see proposition 6.8). Using it we are able to reduce the proof, also for $H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X})$, to bounded functions with bounded support, but not necessarily nonnegative.

This difference creates two additional difficulties in the scheme sketched above. The first one is that the approximating functions uj do not necessarily have bounded support. This requires an additional cutoff argument. The second difference involves the core of the proof, namely the contradiction argument in Step 4. We need to analyse separately three different cases, one of which is the only one that needed to be considered in [Reference Eriksson-Bique18]. For more details we refer to Step 8 of the proof of theorem 6.4.

Moreover, the adaptation of the proof to the inclusion $H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X}) \subseteq H^{1,p}_{{\rm \mathscr{C}^{},{\rm Lip}}}({\rm X})$ requires additional care in the contradiction argument in Step 4. This is due to the fact that an ɛ-upper gradient, which may take extended values, satisfies the upper gradient inequality along many curves but not necessarily along all of them, see proposition 4.4.

1.2. Application to characterizations of Poincaré inequality

A standard consequence of theorems like theorem 1.2 is the equivalence between a priori different formulations of the Poincaré inequality. Tailoring such a discussion to our case, we obtain the equivalence between the Poincaré inequality formulated for couples $(u, {\rm lip}\, u)$ with $u\in {\rm Lip}({\rm X})$ and for couples (u, g) where u is Borel and $g \in {\rm UG}^{\varepsilon}(u)$ for some ɛ > 0, see corollary 7.1. This result holds also in non-complete metric spaces. While in the complete setting the conditions above are also equivalent to the Poincaré inequality formulated for couples (u, g) where u is Borel and g is an upper gradient of u, this is no longer true if the metric measure space is not complete, see remark 7.2.

The Poincaré inequality on complete and doubling metric measure spaces can be characterized in at least three ways: Heinonen’s pointwise estimates in [Reference Heinonen21], Keith’s modulus estimates in [Reference Keith24] and via the energy of separating sets for p = 1 in [Reference Caputo and Cavallucci12]. We reinterpret these characterizations, also for non-complete spaces, in terms of chains respectively in $\S\S$ 7.1–7.3. To treat the analogue of Keith’s estimate we need to introduce the notion of modulus of a family of chains and to study its basic properties. This is done in $\S$ 5.

Interestingly, the approach via chains makes it possible to improve a result concerning upper gradients along curves by relaxing the assumptions on the metric measure space. Indeed, in proposition 7.6 and remark 7.7 we prove the equivalence of the validity of the following Heinonen pointwise estimates at a fixed pair of points $x,y \in {\rm X}$:

\begin{equation*} \vert u(x) - u(y) \vert ^p \le C{\sf d}(x,y)^{p-1}\int ({\rm lip}\,u)^p \,{\mathrm d}\mathfrak m_{x,y}^L\qquad \text{for all }u \in {\rm Lip}({\rm X}), \end{equation*}

and

\begin{equation*} \vert u(x) - u(y) \vert ^p \le C{\sf d}(x,y)^{p-1}\int g^p \,{\mathrm d}\mathfrak m_{x,y}^L \qquad \text{for all } u \text{ Borel and } g \text{ upper gradient of } u. \end{equation*}

Here $\mathfrak m_{x,y}^L$ is the measure that is absolutely continuous with respect to the reference measure, with density the truncated Riesz potential with poles x and y (see $\S$ 7.1). This result was proved by the authors in [Reference Caputo and Cavallucci11, theorem A.3], under the additional assumption of local quasiconvexity of the metric space. The use of chains allows us to remove this additional assumption. Similar improvements can be found, for example, in [Reference Eriksson-Bique and Poggi-Corradini20]. One motivation for this is explained in detail in remark 7.7. It is likely that similar improvements can be performed in other situations.

1.3. Relations to previous literature

Although not extensively studied, the idea of using ɛ-upper gradients was present in [Reference Keith and Rajala25], building upon previous ideas in [Reference Koskela and MacManus27]. There, the authors prove, in the complete and doubling setting, that the space satisfies the Poincaré inequality for the couple $(u, {\rm lip}\, u)$ with u Lipschitz if and only if it satisfies a one-parameter family of Poincaré inequalities for the couple $(u,D_\varepsilon u)$ with locally integrable u, with constants independent of ɛ. The function $D_\varepsilon u \colon {\rm X} \to [0,\infty]$, defined as

\begin{equation*} D_\varepsilon u(x):= \sup_{y \in B_\varepsilon(x)\setminus \{x \}}\frac{|u(y)-u(x)|}{\varepsilon}, \end{equation*}

is a variant of the slope of u at size ɛ, denoted by ${\rm sl}_\varepsilon u$ (see $\S$ 2 for its definition). The ‘only if’ implication is proved by constructing, as in [Reference Koskela and MacManus27, lemma 4.6], an $(\varepsilon,1)$-upper gradient (in the sense of remark 4.1) of a suitable locally Lipschitz function and proving that it is an upper gradient. We refer the reader to lemma 4.3 and proposition 4.4 for statements in a similar spirit.

Williams in [Reference Williams32] characterized the Sobolev space for p > 1 in terms of a sequence of energies defined via moduli of families of paths. More precisely, $u\colon {\rm X} \to \mathbb{R}$ belongs to $H^{1,p}_{{\rm curve}}({\rm X})$ if and only if

\begin{equation*} \varliminf_{\varepsilon\to 0} E_\varepsilon(u) \lt \infty, \quad \text{where } E_\varepsilon(u):=\varepsilon^p \, {\rm Mod}_p\big(\{\gamma \in C([0,1],{\rm X})\,:\,|u(\gamma(1))-u(\gamma(0))| \gt \varepsilon \}\big), \end{equation*}

and in this case the limit inferior is a limit and equals $\|g_u\|_{L^p}^p$. For a fixed ɛ > 0, the energy $E_\varepsilon(u)$ is not related to the infimum of the Lp-norms of ɛ-upper gradients of u. For instance, an oscillating function $u\colon [0,1] \to [0,\frac{\varepsilon}{2}]$ has $E_\varepsilon(u) = 0$, while the infimum of the Lp-norms of its ɛ-upper gradients can be non-zero. Williams’ characterization uses properties of the image of u, while ours uses a discretization in the base space. Moreover, we also cover the case p = 1.

The authors in [Reference Jiang, Shanmugalingam, Yang and Yuan23] introduced the notion of local Hajłasz gradient. Namely, a function $g\colon {\rm X} \to [0,+\infty]$ is a local Hajłasz gradient of $u\colon {\rm X} \to \mathbb{R}$ if for every $x\in {\rm X}$ there exists a neighbourhood Ux such that for every $y,z \in U_x$ it holds that

\begin{equation*} \vert u(z) - u(y) \vert \le {\sf d}(y,z)(g(y) + g(z)). \end{equation*}

This is a non-quantified version of ɛ-upper gradients. Indeed, if g is an ɛ-upper gradient then $g/2$ is a local Hajłasz gradient, by remark 4.2. They prove that for every local Hajłasz gradient g, 4g is an upper gradient along curves with endpoints in $\{g \lt +\infty\}$. Our proposition 4.4 proves a similar, sharp statement for ɛ-upper gradients. We notice that their proof can be modified in order to show that for every local Hajłasz gradient g, 2g is an upper gradient. This is the sharp constant.

Our proof of proposition 4.4 is substantially different from theirs. They apply the McShane extension theorem to the sublevel sets of local Hajłasz gradients, while we use the fact that, in a suitable sense, the one-dimensional Lebesgue integral can be seen as a limit of Riemann sums.

They also relate Lp-integrable functions with Lp-integrable local Hajłasz gradients to the p-Newtonian Sobolev space. Under the additional assumption that the space satisfies a p-Poincaré inequality, they prove that the two normed spaces are equivalent. The p-Poincaré inequality in their proof is crucial, since they use the pointwise estimates for the maximal function in order to define a local Hajłasz gradient starting from an upper gradient. Instead, we work without the assumption that the space satisfies a p-Poincaré inequality. In order to do that, we use and modify the techniques in [Reference Eriksson-Bique18]. Theorem 1.1 shows that, for complete metric measure spaces, the Newtonian Sobolev space is isometric, and not only equivalent, to $H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X})$. As a by-product we also get the density of Lipschitz functions with Lipschitz upper gradients in the Sobolev space, a result of independent interest.

The main idea of [Reference Eriksson-Bique18] of exploiting chains in the approximation scheme, as we recalled in $\S$ 1.1, has found applications in other problems in analysis on metric spaces. Similar tools have been used, because of the lack of standing connectivity assumptions on the space, by Eriksson-Bique and Poggi-Corradini in [Reference Eriksson-Bique and Poggi-Corradini19, Reference Eriksson-Bique and Poggi-Corradini20]. The motivations are always the same: either approximating Sobolev functions with more regular functions (Lipschitz or continuous) or extending functions from a subset to the whole space.

This idea is used in [Reference Eriksson-Bique and Poggi-Corradini19] to prove the sharp modulus duality lower bound in metric spaces (this problem has a long history starting from classical works of Ahlfors and Beurling in the Euclidean plane [Reference Ahlfors and Beurling1]). In [Reference Eriksson-Bique and Poggi-Corradini20], the approximation scheme is used to prove that continuous functions are dense in norm in the Sobolev space for locally complete and separable metric measure spaces. They also show that the condenser capacity can be equivalently defined with classes of functions with different regularity. For a more general account of the theory and the relations of Eriksson-Bique’s result in [Reference Eriksson-Bique18] to the metric theory of Sobolev spaces, we refer the reader to [Reference Ambrosio, Ikonen, Lučić and Pasqualetto6, section 6].

We conclude the history of the subject around chains by mentioning that the idea of using objects that are more flexible than curves in metric measure geometry dates back to the concept of curve fragments. These are biLipschitz images of compact subsets of the real line into the metric space. The concept was introduced by Bate to define Alberti representations of measures [Reference Bate7] and study Lipschitz differentiability spaces (see also [Reference Bate and Li9] and [Reference Eriksson-Bique17]). A chain may be thought of as a degenerate curve fragment, where the compact set is a finite union of points in the real line. Bate and Li defined the concept of $\ast$-upper gradient in [Reference Bate and Li9] (extensively studied in [Reference Bate, Eriksson-Bique and Soultanis8]) to study differentiability spaces. The $\ast$-upper gradient condition requires a suitable upper gradient inequality along curve fragments. In particular, when restricted to the case of chains, as special cases of fragments, this is a different condition from our notion of ɛ-upper gradient.

1.4. Structure of the paper

Section 2 contains general facts about measure theory, curves and chains on metric spaces. In $\S$ 3 we recall the definition of Sobolev spaces via a relaxation approach. In $\S$ 4 we define chain upper gradients and we derive basic properties and relations with the classical upper gradients along curves. Section 5 introduces the notions of modulus of a family of chains and of weak chain upper gradient. Similarities and differences with the theory of weak upper gradients along curves are shown. In $\S$ 6 we define the Sobolev spaces via chain upper gradients. In $\S$ 6.1 we prove the main results, theorems 1.1 and 1.2. Section 7 contains equivalent formulations of Poincaré inequality in the possibly non-complete setting in terms of chains.

2. Preliminaries

Let $({\rm X},{\sf d},\mathfrak m)$ be a metric measure space. With this term, we mean that $({\rm X},{\sf d})$ is a separable metric space which is not necessarily complete and $\mathfrak m$ is a non-trivial outer measure which is Radon and finite on bounded sets. If $({\rm X},{\sf d},\mathfrak m)$ is a metric measure space in the sense of [Reference Heinonen, Koskela, Shanmugalingam and Tyson22, p. 62], i.e. it is a triple such that $({\rm X},{\sf d})$ is complete and separable and $\mathfrak m$ is a non-trivial outer measure which is Borel regular and finite on bounded sets, then $\mathfrak m$ is Radon by [Reference Heinonen, Koskela, Shanmugalingam and Tyson22, proposition 3.3.44], and so $({\rm X},{\sf d},\mathfrak m)$ is a metric measure space in the sense above. This is false if $({\rm X},{\sf d})$ is not complete. If a triple $({\rm X},{\sf d},\mathfrak m)$ is such that $({\rm X},{\sf d})$ is separable and $\mathfrak m$ is a non-trivial outer measure which is finite on bounded sets, then $\mathfrak m$ is Radon if and only if the space $(\bar{{\rm X}},\bar{{\sf d}},\bar{\mathfrak m})$ is again a metric measure space as defined above (cp. [Reference Heinonen, Koskela, Shanmugalingam and Tyson22, proposition 3.3.46]: the proof still works with our assumption of finiteness on bounded sets in place of local finiteness), where $(\bar{{\rm X}},\bar{{\sf d}})$ denotes the completion of $({\rm X},{\sf d})$ and $\bar{\mathfrak m}$ is the outer measure $\bar{\mathfrak m}(E) := \mathfrak m(E\cap {\rm X})$ for every $E\subseteq \bar{{\rm X}}$. As a consequence of [Reference Heinonen, Koskela, Shanmugalingam and Tyson22, proposition 3.3.46], under our assumptions

(2)\begin{equation} {\rm X} = B \cup N\ \text{where } B\ \text{is Borel in}\,\, \bar{{\rm X}}\,\, \text{and } \mathfrak m(N) = 0. \end{equation}

A metric measure space $({\rm X},{\sf d},\mathfrak m)$ is said to be doubling if there exists $C_D \ge 1$ such that

\begin{equation*} \mathfrak m(B_{2r}(x)) \le C_D \mathfrak m(B_r(x)) \qquad \text{for all } x\in {\rm X}, r \gt 0. \end{equation*}

If $({\rm X},{\sf d},\mathfrak m)$ is doubling and $({\rm X},{\sf d})$ is complete, then $({\rm X},{\sf d})$ is a proper metric space, i.e. every closed bounded set is compact.

We denote by $\mathcal{L}^{p}({\rm X})$ the space of functions $u\colon {\rm X} \to \mathbb{R}$ such that $\int \vert u \vert^p \,{\mathrm d} \mathfrak m \lt \infty$, and by $L^p({\rm X})$ its quotient by the equivalence relation that identifies two functions if they agree $\mathfrak m$-almost everywhere. The $L^p({\rm X})$ norm will be denoted by $\Vert \cdot \Vert_{L^p({\rm X})}$. The class of Lipschitz functions on ${\rm X}$ is denoted by ${\rm Lip}({\rm X})$.

A function $u\colon {\rm X} \to \mathbb{R}$ is bounded if there exists $M \ge 0$ such that $\vert u \vert \le M$. It has bounded support if there exists a bounded subset $B\subseteq {\rm X}$ such that $u\equiv 0$ on ${\rm X} \setminus B$. The slope of $u\colon {\rm X} \to \mathbb{R}$ at size ɛ > 0 is defined as

\begin{equation*} {\rm sl}_{\varepsilon} u(x):=\sup_{y \in B_\varepsilon(x)\setminus \{x \}}\frac{|u(y)-u(x)|}{{\sf d}(y,x)}. \end{equation*}

If $\varepsilon' \lt \varepsilon$ then ${\rm sl}_{\varepsilon'}u \le {\rm sl}_{\varepsilon} u$. The local Lipschitz constant of u is defined as

\begin{equation*}{\rm lip}\, u(x):=\lim_{\varepsilon \to 0} {\rm sl}_{\varepsilon} u(x)= \inf_{\varepsilon \gt 0} {\rm sl}_{\varepsilon} u(x).\end{equation*}

The local Lipschitz constant is sometimes denoted by ${\rm Lip}$ (as for instance in [Reference Cheeger14]). However, we prefer the notation ${\rm lip}\,u$, which is more consistent with modern works.
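
As an elementary illustration of the two quantities just defined, consider $u(x) = x^2$ on $\mathbb{R}$ with the Euclidean distance. At the point x = 0 one has

\begin{equation*} {\rm sl}_{\varepsilon} u(0) = \sup_{0 \lt \vert y \vert \lt \varepsilon}\frac{\vert y^2 - 0\vert}{\vert y - 0\vert} = \varepsilon, \qquad {\rm lip}\, u(0) = \lim_{\varepsilon \to 0}{\rm sl}_{\varepsilon} u(0) = 0, \end{equation*}

so the slope at a fixed size ɛ can be strictly larger than the local Lipschitz constant, to which it decreases as ɛ goes to zero.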

2.1. Relaxation of functionals

Let $1\le p \lt \infty$ and let ${\rm F}{}\colon \mathcal{L}^{p}({\rm X}) \to [0,+\infty]$ be a functional such that

  1. (a) ${\rm F}{}(0) = 0$,

  2. (b) ${\rm F}{}(u+v) \le {\rm F}{}(u) + {\rm F}{}(v)$,

  3. (c) ${\rm F}{}(\lambda u) = \vert \lambda \vert {\rm F}{}(u)$

for every $u,v\in \mathcal{L}^{p}({\rm X})$ and $\lambda \in \mathbb{R}$. By definition, the relaxation of ${\rm F}{}$ is the biggest functional $\tilde{\rm F}_{{\rm }} \colon \mathcal{L}^{p}({\rm X}) \to [0,+\infty]$ which is lower semicontinuous with respect to the $L^p({\rm X})$-norm and such that $\tilde{\rm F}_{{\rm }} \le {\rm F}{}$. A concrete description of $\tilde{\rm F}_{{\rm }}$ is given by

\begin{equation*} \tilde{\rm F}_{{\rm }}(u) = \inf\left\{\varliminf_{j\to +\infty}{\rm F}{}(u_j)\, : \, u_j \underset{L^p({\rm X})}{\longrightarrow} u\right\}. \end{equation*}

By the definition above, $\tilde{\rm F}_{{\rm }}$ induces a functional on $L^p({\rm X})$ which is lower semicontinuous and satisfies properties (a), (b) and (c) above. Therefore one can define the space

(3)\begin{equation} H^{1,p}_{{\rm F}}({\rm X}) := \left\{u \in L^p({\rm X})\,:\, \tilde{\rm F}_{{\rm }}(u) \lt +\infty \right\}, \end{equation}

endowed with the norm

\begin{equation*} \Vert u \Vert_{H^{1,p}_{{\rm F}}({\rm X})}^p := \Vert u \Vert_{L^p({\rm X})}^p + \tilde{\rm F}_{{\rm }}(u)^p. \end{equation*}

The normed space $(H^{1,p}_{{\rm F}}({\rm X}), \Vert \cdot \Vert_{H^{1,p}_{{\rm F}}({\rm X})})$ is a Banach space since $\tilde{\rm F}_{{\rm }}$ is $L^p({\rm X})$-lower semicontinuous. Indeed, given an $H^{1,p}_{{\rm F}}({\rm X})$-Cauchy sequence uj, which is also $L^p({\rm X})$-Cauchy, we can extract an $L^p({\rm X})$-limit u. By lower semicontinuity, $\tilde{\rm F}_{{\rm }}(u-u_k)\le \varliminf_{j\to +\infty} \tilde{\rm F}_{{\rm }}(u_j-u_k)$ for every k, thus

\begin{equation*} 0 \le \varlimsup_{k\to +\infty} \tilde{\rm F}_{{\rm }}(u-u_k)\le \varlimsup_{k\to +\infty}\varliminf_{j\to +\infty} \tilde{\rm F}_{{\rm }}(u_j-u_k)=0, \end{equation*}

where the last equality follows from $\{u_j\}$ being $H^{1,p}_{{\rm F}}({\rm X})$-Cauchy. Throughout the whole paper we will consider several functionals ${\rm F}{}$, for which it can be readily checked that properties (a), (b) and (c) above are always satisfied.

2.2. Curves and chains

Let $({\rm X},{\sf d})$ be a metric space. A curve is a continuous function $\gamma \colon [a,b] \to {\rm X}$ for some $a,b \in \mathbb{R}$ with a < b. The starting and final points of γ are, respectively, $\alpha(\gamma):=\gamma(a)$ and $\omega(\gamma) := \gamma(b)$. The length of a curve γ is defined as

\begin{equation*} \ell(\gamma):=\sup \left\{\sum_{i=0}^{N-1} {\sf d}(\gamma_{t_i},\gamma_{t_{i+1}})\,:\,a=t_0 \lt t_1 \lt \cdots \lt t_N=b,\, N \in \mathbb{N} \right\}. \end{equation*}

A curve of finite length is called rectifiable. Every rectifiable curve $\gamma \colon [a,b] \to {\rm X}$ admits a reparametrization $s_\gamma \colon [0,\ell(\gamma)] \to [a,b]$ by arc-length. This means that the curve $\gamma' := \gamma \circ s_\gamma \colon [0,\ell(\gamma)] \to {\rm X}$ satisfies $\ell(\gamma'{|_{[0,t]}}) = t$ for every $t\in [0,\ell(\gamma)]$. Given two points $x,y\in {\rm X}$, we denote by $\Gamma_{x,y}$ the set of rectifiable curves γ with $\alpha(\gamma)=x$ and $\omega(\gamma)=y$.

The integral of a Borel function $g\colon {\rm X} \to [0,+\infty]$ over a rectifiable curve γ is

\begin{equation*} \int_\gamma g := \int_0^{\ell(\gamma)} g(\gamma(s_\gamma(t)))\,{\mathrm d} t. \end{equation*}

Let ɛ > 0. An ɛ-chain is a finite collection of points $\{q_i \}_{i=0}^N$ such that ${\sf d}(q_i,q_{i+1})\le \varepsilon$ for every $i=0,\ldots,N-1$. The set of all ɛ-chains of ${\rm X}$ is denoted by $\mathscr{C}^{\varepsilon}({\rm X})$. The set of all chains of ${\rm X}$ is $\mathscr{C}^{}({\rm X}) := \bigcup_{\varepsilon \gt 0} \mathscr{C}^{\varepsilon}({\rm X})$. When the context is clear we simply write $\mathscr{C}^{\varepsilon}$ and $\mathscr{C}^{}$. More generally, if E is a subset of ${\rm X}$, we set $\mathscr{C}^{}(E) := \{{\sf c} = \{q_i\}_{i=0}^N \in \mathscr{C}^{} \,:\, q_i \in E \text{ for some } 0\le i \le N\}$ and $\mathscr{C}^{\varepsilon}(E) := \mathscr{C}^{}(E) \cap \mathscr{C}^{\varepsilon}$. Notice that the two definitions of $\mathscr{C}^{\varepsilon}({\rm X})$ and $\mathscr{C}^{}({\rm X})$ are consistent.

The first and last points of ${\sf c} = \{q_i\}_{i=0}^N \in \mathscr{C}^{}$ are respectively $\alpha({\sf c}) := q_0$ and $\omega({\sf c}) = q_N.$ The concatenation of ${\sf c} = \{q_i\}_{i=0}^N \in \mathscr{C}^{\varepsilon}, {\sf c}' =\{q_i'\}_{i=0}^{N'} \in \mathscr{C}^{\varepsilon'}$ such that $\omega({\sf c}) = \alpha({\sf c}')$ is defined as

\begin{equation*} {\sf c}\star {\sf c}' = \{q_0,\ldots,q_N = q_0',q_1',\ldots,q_{N'}'\}. \end{equation*}

Notice that ${\sf c}\star {\sf c}' \in \mathscr{C}^{\varepsilon \vee \varepsilon'}$ and $\alpha({\sf c}\star {\sf c}') = \alpha({\sf c})$, $\omega({\sf c}\star {\sf c}') = \omega({\sf c}')$. The inverse of a chain ${\sf c} = \{q_0,\ldots,q_N\} \in \mathscr{C}^{}$ is $-{\sf c} := \{q_N,\ldots,q_0\}$. If $x,y\in {\rm X}$ are two points then we set $\mathscr{C}^{}_{x,y} := \{{\sf c} \in \mathscr{C}^{} \, : \, \alpha({\sf c}) = x, \omega({\sf c}) = y\}$ and $\mathscr{C}^{\varepsilon}_{x,y} := \mathscr{C}^{}_{x,y} \cap \mathscr{C}^{\varepsilon}$.

A metric space $({\rm X},{\sf d})$ is said to be ɛ-chain connected if $\mathscr{C}^{\varepsilon}_{x,y} \neq \emptyset$ for every $x,y \in {\rm X}$. A metric space can be decomposed into ɛ-chain connected components in the following way. Given two points $x,y\in {\rm X}$ we say that $x \sim_\varepsilon y$ if and only if $\mathscr{C}^{\varepsilon}_{x,y} \neq \emptyset$. This defines an equivalence relation on ${\rm X}$. Such a relation partitions ${\rm X}$ into a family of sets $\{A_i\}_{i \in I}$. If $({\rm X},{\sf d})$ is separable, it can be readily checked that the set of indices I is countable. Moreover, every set Ai is ɛ-chain connected: it is called an ɛ-chain connected component of ${\rm X}$. By definition, every ɛ-chain connected component is both open and closed. Moreover, we have that

(4)\begin{equation} {\sf d}(A_i,A_j) \ge \varepsilon,\qquad \text{if }i \neq j. \end{equation}

Let $\varepsilon' \le \varepsilon$ and let ${\rm X} =\bigcup_{i\in I_\varepsilon} A_i^\varepsilon =\bigcup_{i\in I_{\varepsilon'}} A_i^{\varepsilon'}$, be the two decompositions where $\{A_i^\varepsilon\}_{i\in I_\varepsilon}$ and $\{A_i^{\varepsilon'}\}_{i \in I_{\varepsilon'}}$ are respectively the ɛ and $\varepsilon'$-chain connected components of ${\rm X}$. Then, for every $i \in I_{\varepsilon'}$ there exists $j\in I_{\varepsilon}$ such that $A_i^{\varepsilon'} \subseteq A_j^{\varepsilon}$. The set of ɛ-chain connected components of ${\rm X}$ is denoted by $\mathscr{C}^{\varepsilon}\textrm{-cc}({\rm X})$.
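
As a simple illustration, consider ${\rm X} = [0,1] \cup [2,3] \subseteq \mathbb{R}$ with the Euclidean distance. For every $\varepsilon \lt 1$ there are exactly two ɛ-chain connected components, namely $A_1 = [0,1]$ and $A_2 = [2,3]$, and ${\sf d}(A_1,A_2) = 1 \ge \varepsilon$, in accordance with (4). For $\varepsilon \ge 1$ the whole space is ɛ-chain connected: given $x\in [0,1]$ and $y \in [2,3]$, the chain $\{x,1,2,y\}$ belongs to $\mathscr{C}^{\varepsilon}_{x,y}$.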

Given ${\sf c}=\{q_i \}_{i=0}^N \in \mathscr{C}^{}$ and a function $g \colon {\rm X} \to [0,+\infty]$ we define

(5)\begin{equation} \int_{{{\sf c}}} g := \sum_{i=0}^{N-1}\frac{g(q_i) + g(q_{i+1})}{2}{\sf d}(q_i,q_{i+1}). \end{equation}

For every function g and every two chains ${\sf c},{\sf c}' \in \mathscr{C}^{}$ it holds

(6)\begin{equation} \int_{{\sf c} \star {\sf c}'} g = \int_{\sf c} g + \int_{{\sf c}'} g,\qquad \int_{-{\sf c}} g = \int_{\sf c} g. \end{equation}

Moreover, the integral over a fixed chain ${\sf c}$ is linear, i.e. $\int_{\sf c} a g + b h = a \int_{\sf c} g + b \int_{\sf c} h$ for every $g,h \colon {\rm X} \to [0,+\infty]$ and every $a,b \ge 0$.

The length of a chain ${\sf c} = \{q_i\}_{i=0}^N \in \mathscr{C}^{}$ is $\ell({\sf c}):=\int_{{\sf c}} 1 =\sum_{i=0}^{N-1}{\sf d}(q_i,q_{i+1})$.
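
For instance, for the chain ${\sf c} = \{0,\tfrac12,1\} \in \mathscr{C}^{\frac12}$ in $\mathbb{R}$ and the function $g(x) = x$, definition (5) gives

\begin{equation*} \int_{{\sf c}} g = \frac{g(0)+g\left(\tfrac12\right)}{2}\cdot\frac12 + \frac{g\left(\tfrac12\right)+g(1)}{2}\cdot\frac12 = \frac18 + \frac38 = \frac12 = \int_0^1 x\,{\mathrm d} x, \end{equation*}

and $\ell({\sf c}) = 1$. The integral along a chain is thus a trapezoidal-type approximation of the integral along the segment with the same endpoints; see proposition 4.4 and remark 2.9 for precise comparisons between integrals along chains and along curves.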

Remark 2.1. There is no canonical way to define the integral over a chain. Let $\lambda \in [0,1]$. If $a,b \in \mathbb{R} \cup \{+\infty\}$ we set $[a,b]_\lambda := \lambda a + (1-\lambda) b$. Given ${\sf c}=\{q_i \}_{i=0}^N \in \mathscr{C}^{}$, a function $g \colon {\rm X} \to [0,+\infty]$ and $\lambda \in [0,1]$, we define the λ-integral of g over ${\sf c}$ as

\begin{equation*} {\vphantom{\int}}^\lambda{\!\!\!}{\int}_{{\!\!\!{\sf c}}} g := \sum_{i=0}^{N-1}[g(q_i),g(q_{i+1})]_\lambda{\sf d}(q_i,q_{i+1}). \end{equation*}

When $\lambda = \frac12$ we recover the definition in (5), while for λ = 1 we find the expression used in [Reference Eriksson-Bique18]. The λ-integral is linear in the sense above and it satisfies the first equality of (6). The second equality of (6) becomes ${}^\lambda{\int}_{-{\sf c}} g = {}^{1-\lambda}{\int}_{\sf c} g$ for every $g\colon {\rm X} \to [0,+\infty]$ and every ${\sf c} \in \mathscr{C}^{}$. In the paper we will present the results for the $\frac12$-integral, for simplicity, and we will briefly comment on how it works for different values of $\lambda \in [0,1]$.
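
For concreteness, with the same chain ${\sf c} = \{0,\tfrac12,1\}$ and function $g(x) = x$ as above, for λ = 1 one computes

\begin{equation*} {}^{1}{\int}_{{\sf c}} g = g(0)\cdot\tfrac12 + g\left(\tfrac12\right)\cdot\tfrac12 = \tfrac14, \qquad {}^{1}{\int}_{-{\sf c}} g = g(1)\cdot\tfrac12 + g\left(\tfrac12\right)\cdot\tfrac12 = \tfrac34 = {}^{0}{\int}_{{\sf c}} g, \end{equation*}

which illustrates both the asymmetry of the λ-integral for $\lambda \neq \frac12$ and the identity ${}^\lambda{\int}_{-{\sf c}} g = {}^{1-\lambda}{\int}_{\sf c} g$.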

2.3. Convergence of chains to curves

We recall the notion of convergence of a sequence of chains to a curve defined in [Reference Eriksson-Bique18]. Given a chain ${\sf c} = \{q_i\}_{i=0}^N$ we define the set of interpolating times $(t_0, \ldots, t_N)$ by $t_0 = 0$ and $t_i = \frac{\ell(\{q_0,\ldots,q_i\})}{\ell({\sf c})}$. Then we define the function $\gamma_{{\sf c}}\colon [0,1] \to {\rm X}$ piecewise by $\gamma_{{\sf c}}(t) := q_i$ for $t\in [t_i,t_{i+1})$, $i=0,\ldots,N-1$, and $\gamma_{{\sf c}}(1) := q_N$. We say that a sequence of chains $\{{\sf c}_j\}_j$ converges to a curve $\gamma \colon [0,1] \to {\rm X}$ if $\{\gamma_{{\sf c}_j}\}_j$ converges uniformly to γ as j goes to $+\infty$.
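
For instance, for the chain ${\sf c} = \{0,1,3\}$ in $\mathbb{R}$ one has $\ell({\sf c}) = 3$ and interpolating times $t_0 = 0$, $t_1 = \frac13$, $t_2 = 1$, so that

\begin{equation*} \gamma_{{\sf c}}(t) = 0 \ \text{for } t\in \left[0,\tfrac13\right), \qquad \gamma_{{\sf c}}(t) = 1 \ \text{for } t\in \left[\tfrac13,1\right), \qquad \gamma_{{\sf c}}(1) = 3. \end{equation*}

We have the following compactness result for complete metric spaces.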

Proposition 2.2. [Reference Eriksson-Bique18, lemma 2.18]

Let $({\rm X},{\sf d})$ be a complete metric space. Let $K_j \subseteq {\rm X}$ be an increasing sequence of compact subsets of ${\rm X}$ and let $h_j(x):= \sum_{i=1}^j \min\{j{\sf d}(x,K_i),1\}$. Let $M,L,\Delta \gt 0$ be constants. Let ${\sf c}_j = \{q_0^j,\ldots,q_{N_j}^j\} \in \mathscr{C}^{\frac{1}{j}}({\rm X})$ be chains such that:

  1. (i) $\ell({\sf c}_j) \le L$ for every j;

  2. (ii) ${\rm Diam}({\sf c}_j) := \max\{{\sf d}(q,q') \, : \, q,q'\in {\sf c}_j\} \ge \Delta$ for every j;

  3. (iii) $\sum_{m=0}^{N_j-1} h_j(q_m^j){\sf d}(q_m^j,q_{m+1}^j) \le M$ for every j.

Then there exists a subsequence of $\{{\sf c}_j\}_j$ that converges to a curve $\gamma \colon [0,1] \to {\rm X}$.

Remark 2.3. Condition (iii) in proposition 2.2 is exactly ${}^1{\int}_{{\sf c}_j} h_j \le M$ for every j. The proof of [Reference Eriksson-Bique18, lemma 2.18] can be straightforwardly modified by replacing (iii) with (iii)$_\lambda$: ${}^\lambda{\int}_{{\sf c}_j} h_j \le M$ for every j, for every $\lambda \in [0,1]$.

The next goal is to compare the integral of a function along a sequence of chains $\{{\sf c}_j\}_j$ converging to a curve γ with the integral of the same function along γ. To this aim, we need the following approximation of the Lebesgue integral by Riemann sums. A classical reference is [Reference Doob15, p. 63], but we adopt a strategy very close to the proof of [Reference Caputo, Gigli and Pasqualetto13, proposition 3.18].

Proposition 2.4. Let $f \colon [0,\ell] \to \mathbb{R} \cup \{+\infty\}$ be a Borel integrable function such that $f(0),f(\ell) \lt \infty$. For $n \in \mathbb{N}$ and $t\in [0,1]$ we set

\begin{equation*} \begin{aligned} R_t(f,n) := \frac{f(0) + f\left(\ell\left(\frac{t}{n}\right)\right)}{2} \cdot \ell\left(\frac{t}{n}\right) &+ \sum_{i=0}^{n-2}\frac{f\left(\ell\left(\frac{t+i}{n}\right)\right)+f\left(\ell\left(\frac{t+(i+1)}{n}\right)\right)}{2}\cdot\frac{\ell}{n} \\ &+ \frac{f\left(\ell \left(\frac{t+n-1}{n}\right)\right) + f(\ell)}{2} \cdot \ell\left(\frac{1-t}{n}\right). \end{aligned} \end{equation*}

Then

\begin{equation*}\lim_{n\to +\infty} \int_0^1 \left\vert R_t(f,n) - \int_0^\ell f(s) \,{\mathrm d} s \right\vert\,{\mathrm d} t = 0.\end{equation*}

Remark 2.5. The quantity $R_t(f,n)$ should be thought of as a Riemann sum associated to the partition $0 \le \ell\left(\frac{t}{n}\right) \lt \ldots \lt \ell \left( \frac{t+n-1}{n} \right) \le \ell$ of $[0,\ell]$. The difference is that f is evaluated through the average of its values at two successive points of the partition instead of its value at a single point of the partition. This is due to our definition of integral along chains. The statement above implies that, up to a subsequence, $R_t(f,n) \to \int_0^\ell f(s)\,{\mathrm d} s$ for a.e. $t \in [0,1]$.

Proof. For a Borel integrable function $f\colon [0,\ell] \to \mathbb{R} \cup \{+\infty\}$ we define the auxiliary quantity

\begin{equation*} R_t'(f,n) := \sum_{i=0}^{n-2}\frac{f\left(\ell\left(\frac{t+i}{n}\right)\right)+f\left(\ell\left(\frac{t+(i+1)}{n}\right)\right)}{2}\cdot\frac{\ell}{n}, \end{equation*}

which is the middle term in the definition of $R_t(f,n)$. First of all we estimate

(7)\begin{equation} \begin{aligned} \int_0^1\left\vert R_t(f,n) - R_t'(f,n) \right\vert\,{\mathrm d} t &\le \frac{\ell}{n}\!\int_0^1\!\left\vert \frac{f(0) + f\left(\ell\left(\frac{t}{n}\right)\right)}{2} \right\vert\ + \left\vert \frac{f\left(\ell \left(\frac{t+n-1}{n}\right)\right) + f(\ell)}{2} \right\vert\,{\mathrm d} t \\ &\le \frac{\ell}{2n}\left( f(0) + f(\ell)\right) + \frac{1}{2}\\ &\quad\times\left(\int_0^{\frac{\ell}{n}} f(u)\,{\mathrm d} u + \int_{\ell\left(\frac{n-1}{n}\right)}^\ell f(u)\,{\mathrm d} u \right), \end{aligned} \end{equation}

and the last two terms go to 0 as n goes to $\infty$, respectively because f assumes finite values at 0 and $\ell$ and by dominated convergence. We now set

\begin{equation*} \begin{aligned} D(f,n):=\int_0^1 \left| R_t'(f,n) -\int_0^\ell f(s)\,{\mathrm d} s \right|\,{\mathrm d} t. \end{aligned} \end{equation*}

If $f \in C^0([0,\ell])$ we have

(8)\begin{equation} D(f,n) \le \int_0^1 \sum_{i=0}^{n-2} \int_{\ell\left(\frac{i}{n}\right)}^{\ell\left(\frac{i+1}{n}\right)} \left| \frac{f\left(\ell\left(\frac{t+i}{n}\right)\right)-f(s)}{2}\right|+\left| \frac{f\left(\ell\left(\frac{t+i+1}{n}\right)\right)-f(s)}{2}\right|\,{\mathrm d} s\,{\mathrm d} t \le \ell\varepsilon \end{equation}

for $n \gt n(f,\varepsilon)$, by uniform continuity of f on $[0,\ell]$. Finally, given two Borel functions f and $f'$, we estimate

(9)\begin{equation} \begin{aligned} |D(f,n)-D(f',n)| & \le \int_0^1 |R_t'(f,n)-R_t'(f',n)|\,{\mathrm d} t + \int_0^1 \left| \int_0^\ell (f-f')(s)\,{\mathrm d} s \right|\,{\mathrm d} t \\ &\le \int_0^1 \left| \sum_{i=0}^{n-2}\frac{(f-f')\left(\ell\left(\frac{t+i}{n}\right)\right)+(f-f')\left(\ell\left(\frac{t+(i+1)}{n}\right)\right)}{2}\cdot\frac{\ell}{n}\right|\,{\mathrm d} t +\|f-f'\|_{L^1(0,\ell)}\\ & \le \frac{1}{2}\sum_{i=0}^{n-2}\int_0^1 |f-f'|\left( \ell\left(\frac{t+i}{n} \right)\right)\cdot\frac{\ell}{n}\,{\mathrm d} t + \frac{1}{2}\sum_{i=0}^{n-2}\int_0^1 |f-f'|\left( \ell\left(\frac{t+i+1}{n} \right)\right)\cdot\frac{\ell}{n}\,{\mathrm d} t + \|f-f'\|_{L^1(0,\ell)}\\ &\le \frac{1}{2}\sum_{i=0}^{n-2}\int_{\ell\left(\frac{i}{n}\right)}^{\ell\left(\frac{i+1}{n}\right)} |f-f'|\,{\mathrm d} s + \frac{1}{2}\sum_{i=0}^{n-2}\int_{\ell\left(\frac{i+1}{n}\right)}^{\ell\left(\frac{i+2}{n}\right)} |f-f'|\,{\mathrm d} s + \|f-f'\|_{L^1(0,\ell)}\\ &\le 2 \|f-f'\|_{L^1(0,\ell)}. \end{aligned} \end{equation}

By approximating $f \in L^1([0,\ell])$ in the $L^1$-norm with a sequence $\{f_j\} \subseteq C^0([0,\ell])$, applying the triangle inequality, (8) and (9), we conclude that $\varlimsup_{n \to \infty} D(f,n)=0$. This, together with (7), proves the claim.

Remark 2.6. Given $\lambda \in [0,1]$, one can prove the equivalent of proposition 2.4 for the approximation

\begin{equation*} \begin{aligned} R_t^\lambda(f,n) &:= \left[f(0), f\left(\ell\left(\frac{t}{n}\right)\right)\right]_\lambda \cdot \ell\left(\frac{t}{n}\right) + \sum_{i=0}^{n-2}\left[f\left(\ell\left(\frac{t+i}{n}\right)\right),\right.\\ & \quad \left.f\left(\ell\left(\frac{t+(i+1)}{n}\right)\right)\right]_\lambda\cdot\frac{\ell}{n} + \left[f\left(\ell \left(\frac{t+n-1}{n}\right)\right), f(\ell)\right]_\lambda \cdot \ell\left(\frac{1-t}{n}\right), \end{aligned} \end{equation*}

where $f\colon [0,\ell] \to \mathbb{R} \cup \{+\infty\}$ is Borel, integrable and such that $f(0),f(\ell) \lt \infty$. When λ = 1, this is exactly the statement of [Reference Doob15, p. 63].

We can adapt the proof of [Reference Eriksson-Bique18, lemma 2.19] to prove the following lemma.

Lemma 2.7. Let $({\rm X},{\sf d})$ be a metric space. Let $g \colon {\rm X} \to [0,+\infty]$ be a lower semicontinuous function. Let $g_j \colon {\rm X} \to [0,+\infty)$ be a sequence of continuous functions such that $g_j(x) \nearrow g(x)$ for every $x\in {\rm X}$. Let $\{{\sf c}_j\}_j$ be a sequence of chains with $\sup_j \ell({\sf c}_j) \lt \infty$ and converging to a curve $\gamma\colon [0,1] \to {\rm X}$. Then

\begin{equation*} \int_\gamma g \,{\mathrm d} s \le \varliminf_{j\to +\infty} \int_{{\sf c}_j} g_j.\end{equation*}

Proof. The proof is identical to the one of [Reference Eriksson-Bique18, lemma 2.19], where the notion of λ-integral along chains with parameter λ = 1 is used. We just need to modify it for the $\frac12$-integral, as we did in the proof of proposition 2.4 with respect to the original proof in [Reference Doob15].

On the other hand we have the next result.

Proposition 2.8. Let $({\rm X},{\sf d})$ be a metric space, let $\gamma\colon [0,L] \to {\rm X}$ be a curve parametrized by arc-length and let $g\colon {\rm X} \to [0,+\infty]$ be a Borel function such that $g(\alpha(\gamma)), g(\omega(\gamma)) \lt \infty$ and $\int_\gamma g \lt +\infty$. For $t\in [0,1]$ and $n\in \mathbb{N}$ define

\begin{equation*} {\sf c}_{t,n} := \left\{\gamma(0), \gamma\left(L\!\left(\frac{t}{n}\right)\!\right), \gamma\left(L\!\left(\frac{t + 1}{n}\right)\!\right), \ldots, \gamma\left(L\!\left(\frac{t + n - 1}{n}\right)\!\right), \gamma(L) \right\} \in \mathscr{C}^{\frac{L}{n}}. \end{equation*}

Then, there exists $t\in [0,1]$ and a subsequence nj such that

\begin{equation*}\int_\gamma g \ge \varlimsup_{j\to +\infty} \int_{{\sf c}_{t,n_j}}g.\end{equation*}

Proof. We apply proposition 2.4 to the function $h=g \circ \gamma$, which is Borel and integrable by assumption and satisfies $h(0),h(L) \lt \infty$. In particular, there exists $t\in [0,1]$ and a subsequence nj such that

\begin{equation*} \lim_{j\to +\infty} R_t(h,n_j) = \int_0^L h(s)\,{\mathrm d} s \end{equation*}

as noted in remark 2.5. For every j we compute

\begin{align*} R_t(h,n_j) &= \frac{h(0) + h\left(L\left(\frac{t}{n_j}\right)\right)}{2} \cdot L\left(\frac{t}{n_j}\right) + \sum_{i=0}^{n_j-2}\frac{h\left(L\left(\frac{t+i}{n_j}\right)\right)+h\left(L\left(\frac{t+(i+1)}{n_j}\right)\right)}{2}\cdot\frac{L}{n_j} \\ &\quad + \frac{h\left(L \left(\frac{t+n_j-1}{n_j}\right)\right) + h(L)}{2} \cdot L\left(\frac{1-t}{n_j}\right)\\ &= \frac{g(\gamma(0)) + g\left(\gamma\left(L\left(\frac{t}{n_j}\right)\right)\right)}{2} \cdot \ell\left(\gamma{|_{\left[0,L\left(\frac{t}{n_j}\right)\right]}}\right) \\ &\quad + \sum_{i=0}^{n_j-2}\frac{g\left(\gamma\left(L\left(\frac{t+i}{n_j}\right)\right)\right)+g\left(\gamma\left(L\left(\frac{t+(i+1)}{n_j}\right)\right)\right)}{2}\cdot\ell\left(\gamma{|_{\left[ L\left(\frac{t+i}{n_j}\right),L\left(\frac{t+(i+1)}{n_j}\right)\right]}}\right) \\ &\quad + \frac{g\left(\gamma\left(L \left(\frac{t+n_j-1}{n_j}\right)\right)\right) + g(\gamma(L))}{2} \cdot \ell\left(\gamma{|_{\left[L\left(\frac{t+n_j-1}{n_j}\right),L\right]}}\right)\\ &\ge \int_{{\sf c}_{t,n_j}} g\\ \end{align*}

where we used in the last inequality that ${\sf d}(\gamma(a),\gamma(b))\le \ell(\gamma{|_{[a,b]}})$ for every $a,b \in [0,L]$.

Remark 2.9. Combining lemma 2.7 and proposition 2.8 we get that for every lower semicontinuous $g\colon {\rm X} \to [0,+\infty]$ there exists $t\in [0,1]$ and a subsequence nj such that

\begin{equation*}\int_\gamma g = \lim_{j\to +\infty} \int_{{\sf c}_{t,n_j}}g.\end{equation*}

Remark 2.10. The proofs of lemma 2.7 and proposition 2.8 can be adapted to the case of λ-integrals along chains, for $\lambda \in [0,1]$, using remark 2.6.

3. Sobolev and BV spaces à la Cheeger and Ambrosio-Gigli-Savaré

In this section, we recall the definitions of two functionals that have been used by Cheeger ([Reference Cheeger14]) and Ambrosio-Gigli-Savaré ([Reference Ambrosio, Gigli and Savaré4, Reference Ambrosio, Gigli and Savaré5]) to define Sobolev spaces, for p > 1, and BV spaces, for p = 1, via relaxation.

Let $u\colon {\rm X} \to \mathbb{R}$ be a Borel function. A Borel function $g\colon {\rm X} \to [0,+\infty]$ is an upper gradient of u, and we write $g\in {\rm UG}^{}(u)$, if

(10)\begin{equation} \vert u(\omega(\gamma)) - u(\alpha(\gamma))\vert \le \int_\gamma g, \end{equation}

for every rectifiable curve γ.

Cheeger considered the functional

\begin{equation*} {\rm F}_{\mathrm{curve}}\colon \mathcal{L}^{p}({\rm X}) \to [0,+\infty], \quad u\mapsto \inf \left\{\Vert g \Vert_{L^p({\rm X})}\,:\, g\in {\rm UG}^{}(u) \right\}, \end{equation*}

with the usual convention that the infimum over an empty set is $+\infty$. The relaxation of ${\rm F}_{\mathrm{curve}}$ is then

\begin{equation*} \tilde{\rm F}_{{\rm curve}}(u) = \inf\left\{\varliminf_{j\to +\infty}\inf_{g\in {\rm UG}^{}(u_j)} \Vert g \Vert_{L^p({\rm X})}\, : \, u_j \underset{L^p({\rm X})}{\longrightarrow} u\right\}. \end{equation*}

We denote the associated Banach space defined as in (3) by $(H^{1,p}_{{\rm curve}}({\rm X}), \Vert \cdot \Vert_{H^{1,p}_{{\rm curve}}({\rm X})})$.

Remark 3.1. If p > 1 the space $(H^{1,p}_{{\rm curve}}({\rm X}), \Vert \cdot \Vert_{H^{1,p}_{{\rm curve}}({\rm X})})$ is isometric to the p-Newtonian-Sobolev space, see [Reference Shanmugalingam31, theorem 4.10]. Instead, the space $(H^{1,1}_{{\rm curve}}({\rm X}), \Vert \cdot \Vert_{H^{1,1}_{{\rm curve}}({\rm X})})$ can be used as a possible definition of the space of BV functions (equivalent to other ones in the literature when $({\rm X},{\sf d})$ is complete, by [Reference Ambrosio and Di Marino2]), which generally strictly contains the 1-Newtonian-Sobolev space.

Ambrosio, Gigli and Savaré defined the functional

\begin{equation*} {\rm F}_{\rm{AGS}}\colon \mathcal{L}^{p}({\rm X}) \to [0,+\infty], \quad u\mapsto \begin{cases} \Vert {\rm lip}\,u \Vert_{L^p({\rm X})} &\text{if }u\in {\rm Lip}({\rm X});\\ +\infty &\text{otherwise}. \end{cases} \end{equation*}

The relaxation of ${\rm F}_{\rm{AGS}}$ is then

\begin{equation*} \tilde{\rm F}_{{\rm AGS}}(u) = \inf\left\{\varliminf_{j\to +\infty} \Vert {\rm lip}\,u_j \Vert_{L^p({\rm X})}\, : \, u_j \in {\rm Lip}({\rm X})\ \text{and } u_j \underset{L^p({\rm X})}{\longrightarrow} u\right\}. \end{equation*}

We denote the associated Banach space defined as in (3) by $(H^{1,p}_{{\rm AGS}}({\rm X}), \Vert \cdot \Vert_{H^{1,p}_{{\rm AGS}}({\rm X})})$.

It is known that if $u\in {\rm Lip}({\rm X})$ then ${\rm lip}\,u \in {\rm UG}^{}(u)$ (see for instance [Reference Heinonen, Koskela, Shanmugalingam and Tyson22, lemma 6.2.6]). This immediately gives that $\tilde{\rm F}_{{\rm AGS}}(u) \ge \tilde{\rm F}_{{\rm curve}}(u)$ for every $u\in L^p({\rm X})$. When the metric space $({\rm X},{\sf d})$ is complete, the several proofs of density in energy of Lipschitz functions (see [Reference Ambrosio, Gigli and Savaré4, Reference Ambrosio and Di Marino2, Reference Eriksson-Bique18, Reference Lučić and Pasqualetto30]) show that $\tilde{\rm F}_{{\rm AGS}}(u) = \tilde{\rm F}_{{\rm curve}}(u)$ for every $u\in L^p({\rm X})$. We summarize these well-known results in the following proposition.

Proposition 3.2. Let $({\rm X},{\sf d},\mathfrak m)$ be a metric measure space. Then $H^{1,p}_{{\rm AGS}}({\rm X}) \subseteq H^{1,p}_{{\rm curve}}({\rm X})$ with $\Vert u \Vert_{H^{1,p}_{{\rm curve}}({\rm X})} \le \Vert u \Vert_{H^{1,p}_{{\rm AGS}}({\rm X})}$ for every $u \in L^p({\rm X})$. Moreover, if $({\rm X},{\sf d})$ is complete then $H^{1,p}_{{\rm AGS}}({\rm X}) = H^{1,p}_{{\rm curve}}({\rm X})$ with $\Vert u \Vert_{H^{1,p}_{{\rm curve}}({\rm X})} = \Vert u \Vert_{H^{1,p}_{{\rm AGS}}({\rm X})}$ for every $u \in L^p({\rm X})$.

The last part of the statement cannot hold without the completeness assumption. The reason is the following: the space $H^{1,p}_{{\rm AGS}}({\rm X})$ does not change if we take the completion of ${\rm X}$, while $H^{1,p}_{{\rm curve}}({\rm X})$ is not preserved.

Proposition 3.3. Let $({\rm X},{\sf d},\mathfrak m)$ be a metric measure space and let $(\bar{{\rm X}},\bar{{\sf d}},\bar{\mathfrak m})$ be its completion. Then the identity map $\iota\colon L^p({\rm X}) \to L^p(\bar{{\rm X}})$ induces an isometry between $H^{1,p}_{{\rm AGS}}({\rm X})$ and $H^{1,p}_{{\rm AGS}}(\bar{{\rm X}})$.

Proof. Given $u \in {\rm Lip}({\rm X})$, there exists a unique extension $\bar{u} \in {\rm Lip}(\bar{{\rm X}})$. For every ɛ > 0 and $x \in {\rm X}$, we have

\begin{equation*} \sup_{y \in (B_\varepsilon(x) \cap {\rm X}) \setminus \{x\}}\frac{|u(y)-u(x)|}{{\sf d}(y,x)} =\sup_{y \in B_\varepsilon(x)\setminus \{x\}}\frac{|\bar{u}(y)-\bar{u}(x)|}{\bar{{\sf d}}(y,x)}, \end{equation*}

where the balls are in $(\bar{{\rm X}},\bar{{\sf d}})$. By denoting with a superscript the space in which the local Lipschitz constant is computed, we have that

\begin{equation*} {\rm lip}^{\rm X} u(x) ={\rm lip}^{\bar{{\rm X}}}\bar{u}(x), \end{equation*}

for every $x\in {\rm X}$. This in particular implies that

\begin{equation*} \|{\rm lip}^{\rm X} u\|_{L^p({\rm X})}= \|{\rm lip}^{\bar{{\rm X}}} \bar{u}\|_{L^p(\bar{{\rm X}})}. \end{equation*}

Moreover, if a sequence of functions $u_j \in {\rm Lip}({\rm X})$ converges to u in $L^p({\rm X})$ then the extensions $\bar{u}_j \in {\rm Lip}(\bar{{\rm X}})$ converge to $\iota(u)$ in $L^p(\bar{{\rm X}})$ as well, since $\bar{\mathfrak m}$ is concentrated on ${\rm X}$. Thus $\iota(H^{1,p}_{{\rm AGS}}({\rm X})) \subseteq H^{1,p}_{{\rm AGS}}(\bar{{\rm X}})$ and $\|\iota(u)\|_{H^{1,p}_{{\rm AGS}}(\bar{{\rm X}})} \le \| u \|_{H^{1,p}_{{\rm AGS}}({\rm X})}$. On the other hand the operator $r \colon H^{1,p}_{{\rm AGS}}(\bar{{\rm X}}) \to H^{1,p}_{{\rm AGS}}({\rm X})$ induced by the restriction from $\bar{{\rm X}}$ to ${\rm X}$ is linear, 1-Lipschitz and satisfies $r \circ \iota = \iota \circ r = \text{id}$, thus concluding the proof.

Example 3.4. In the simple example of ${\rm X} := \mathbb{R} \setminus \mathbb{Q}$ endowed with the Euclidean distance and the Lebesgue measure, we have $H^{1,p}_{{\rm AGS}}({\rm X}) \neq H^{1,p}_{{\rm curve}}({\rm X})$. Indeed, on the one hand we have, by proposition 3.3, that $H^{1,p}_{{\rm AGS}}({\rm X}) \cong H^{1,p}_{{\rm AGS}}(\mathbb{R})$, and the latter is the classical Sobolev space on $\mathbb{R}$ for p > 1 and the classical space of functions with bounded variation on $\mathbb{R}$ for p = 1; on the other hand, $H^{1,p}_{{\rm curve}}({\rm X}) \cong L^p({\rm X}) \cong L^p(\mathbb{R})$ since there are no nonconstant curves in ${\rm X}$ and so the constant function 0 is an upper gradient of every $L^p({\rm X})$ function.

Remark. Example 3.4 shows that $H^{1,p}_{{\rm curve}}({\rm X}) \neq H^{1,p}_{{\rm curve}}(\bar{{\rm X}})$ in general. The two spaces are the same if for instance the p-capacity of $\bar{{\rm X}}\setminus {\rm X}$, namely ${\rm Cap}_p(\bar{{\rm X}}\setminus {\rm X})$, is zero. For the definition of p-capacity we refer to [Reference Heinonen, Koskela, Shanmugalingam and Tyson22, chapter 7]. The following is a (non-exhaustive) list of papers studying sufficient conditions that ensure $H^{1,p}_{{\rm curve}}({\rm X}) = H^{1,p}_{{\rm curve}}(\bar{{\rm X}})$: [Reference Heinonen, Koskela, Shanmugalingam and Tyson22, Reference Koskela26, Reference Koskela, Shanmugalingam and Tuominen28, Reference Lahti29]. They are expressed in terms of capacity or porosity-type conditions.

4. Chain upper gradients

The goal of the next sections is to recover a description of the space $H^{1,p}_{{\rm AGS}}({\rm X})$ in the sense of proposition 3.2, even when ${\rm X}$ is not complete. This is possible if we replace upper gradients with chain upper gradients.

Let $u\colon {\rm X} \to \mathbb{R}$ and ɛ > 0. A function $g \colon {\rm X} \to [0,+\infty]$ is an ɛ-upper gradient of u, and we write $g\in {\rm UG}^{\varepsilon}(u)$, if for all ${\sf c} \in \mathscr{C}^{\varepsilon}$ it holds that

(11)\begin{equation} |u(\omega({\sf c})) - u(\alpha({\sf c}))| \le \int_{\mathsf{c}} g. \end{equation}

The definition of ɛ-upper gradient is very sensitive to the value of the function at every point. Sometimes it is preferable to impose some regularity on the function. With this in mind we consider the class of Lipschitz ɛ-upper gradients of u, namely ${\rm LUG}^{\varepsilon}(u) := {\rm UG}^{\varepsilon}(u) \cap {\rm Lip}({\rm X})$.

Remark 4.1. Let $u\colon {\rm X} \to \mathbb{R}$, ɛ > 0 and $\lambda \in [0,1]$. A function $g \colon {\rm X} \to [0,+\infty]$ is an $(\varepsilon, \lambda)$-upper gradient of u, and we write $g\in {\rm UG}^{\varepsilon,\lambda}(u)$, if for all ${\sf c} \in \mathscr{C}^{\varepsilon}$ it holds that

\begin{equation*} u(\omega({\sf c})) - u(\alpha({\sf c})) \le {}^{\lambda\!\!\!}{\int}_{\!\!\!\mathsf{c}} g. \end{equation*}

In the symmetric case, i.e. when $\lambda = \frac12$, this is equivalent to (11). We also set ${\rm LUG}^{\varepsilon,\lambda}(u) := {\rm UG}^{\varepsilon,\lambda}(u) \cap {\rm Lip}({\rm X})$.

Remark 4.2. The ɛ-upper gradient condition can be tested on nonconstant chains with two elements. Namely, a function $g\colon {\rm X} \to [0,+\infty]$ is an ɛ-upper gradient of $u\colon {\rm X} \to \mathbb{R}$ if and only if for every chain $\{x,y\}$ with $x,y\in {\rm X}$, $x\neq y$ and ${\sf d}(x,y) \le \varepsilon$, it holds that

\begin{equation*}\vert u(x) - u(y) \vert \le \frac{g(x)+g(y)}{2}{\sf d}(x,y) = \int_{\{x,y\}}g.\end{equation*}

One implication is obvious. For the other one we fix a chain ${\sf c} = \{q_i\}_{i=0}^N$ and we compute

\begin{equation*}\vert u(q_N) - u(q_0) \vert \le \sum_{i=0}^{N-1} \vert u(q_i) - u(q_{i+1}) \vert \le \sum_{i=0}^{N-1} \int_{\{q_i,q_{i+1}\}} g = \int_{\sf c} g.\end{equation*}

A similar conclusion holds for $(\varepsilon,\lambda)$-upper gradients, for every $\lambda \in [0,1]$.
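
To make the two-point reduction of remark 4.2 concrete, the following small numerical sketch (in Python; all names and the sample data are ours and not taken from the paper) implements the discrete integral along a chain and checks both the two-point inequality and the resulting estimate along a longer ɛ-chain, for a simple function on a finite subset of the real line.

```python
# Sketch for remark 4.2 (all names and sample data are ours, not from the paper):
# the discrete integral along a chain and the two-point test on a finite subset of R.

def chain_integral(g, chain, dist):
    # sum over consecutive pairs of (g(q_i) + g(q_{i+1})) / 2 * d(q_i, q_{i+1})
    return sum((g(p) + g(q)) / 2 * dist(p, q) for p, q in zip(chain, chain[1:]))

dist = lambda x, y: abs(x - y)
points = [i / 10 for i in range(11)]      # finite sample of [0, 1]
u = lambda x: x ** 2
g = lambda x: 2.5                         # constant candidate dominating sup |u'| = 2
eps = 0.35

# two-point test: enough by remark 4.2
two_point_ok = all(
    abs(u(x) - u(y)) <= chain_integral(g, [x, y], dist) + 1e-12
    for x in points for y in points
    if x != y and dist(x, y) <= eps
)

# the telescoping argument then gives the inequality along any eps-chain
chain = [0.0, 0.3, 0.6, 0.9, 1.0]         # consecutive gaps are at most eps
chain_ok = abs(u(chain[-1]) - u(chain[0])) <= chain_integral(g, chain, dist) + 1e-12
print(two_point_ok, chain_ok)             # expected: True True
```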

The next lemma shows that the slope at level ɛ is always an ɛ-upper gradient. On the other hand, the local Lipschitz constant is smaller than every upper semicontinuous (in particular every Lipschitz) ɛ-upper gradient.

Lemma 4.3. Let $({\rm X},{\sf d})$ be a metric space and let $u\colon {\rm X} \to \mathbb{R}$. Then ${\rm sl}_{\varepsilon} u \in {\rm UG}^{\varepsilon}(u)$ for every ɛ > 0. Moreover, for every $g\in {\rm UG}^{\varepsilon}(u)$ it holds ${\rm sl}_{\varepsilon'} u(x) \leq \sup_{B_{\varepsilon'}(x)} g$ for every $\varepsilon' \le \varepsilon$. Finally, if $g\in {\rm UG}^{\varepsilon}(u)$ is upper semicontinuous then ${\rm lip}\, u \le g$.

Proof. By remark 4.2 it suffices to consider ${\sf c} = \{q_0,q_1\}$. Then

\begin{equation*} \begin{aligned} |u(q_{1}) - u(q_{0})| &= \frac{1}{2}\left( \frac{\vert u(q_{1}) - u(q_{0}) \vert}{{\sf d}(q_0,q_{1})} + \frac{\vert u(q_{1}) - u(q_{0}) \vert}{{\sf d}(q_0,q_{1})} \right){\sf d}(q_0,q_{1})\\ &\leq \frac{{\rm sl}_{\varepsilon} u(q_0) + {\rm sl}_{\varepsilon} u(q_{1})}{2}{\sf d}(q_0,q_{1}) = \int_{\mathsf{c}} {\rm sl}_{\varepsilon} u. \end{aligned} \end{equation*}

This proves the first part of the statement.

We move to the second part. Fix $x \in {\rm X}$, let $\varepsilon' \le \varepsilon$ and consider $y\in B_{\varepsilon'}(x)$, $y \neq x$. Since $g\in {\rm UG}^{\varepsilon}(u)$ and $\{x,y\} \in \mathscr{C}^{\varepsilon}$, we have

\begin{equation*} |u(x)-u(y)| \le {\sf d}(x,y)\,\frac{g(x)+g(y)}{2} \le {\sf d}(x,y)\,\sup_{B_{\varepsilon'}(x)}g. \end{equation*}

By taking the supremum over $y \in B_{\varepsilon'}(x)$ the second conclusion follows. Taking the limit as $\varepsilon' \to 0$ and using the upper semicontinuity of g, we also conclude the third part.
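
The first part of lemma 4.3 can also be tested numerically. The sketch below is our own illustration: the slope at level ɛ is computed here as the supremum of difference quotients over points at distance at most ɛ, which is how we read ${\rm sl}_\varepsilon$ on a finite sample, and the two-point test of remark 4.2 is then verified for it.

```python
# Sketch for lemma 4.3 (our own illustration): the slope at level eps, computed on a
# finite point set as the sup of difference quotients over points at distance at most
# eps, passes the two-point test of remark 4.2, hence it is an eps-upper gradient here.
points = [0.0, 0.2, 0.45, 0.5, 0.9, 1.3]
dist = lambda x, y: abs(x - y)
u = lambda x: abs(x - 0.5)                # a sample 1-Lipschitz function
eps = 0.5

def sl(x):
    quotients = [abs(u(x) - u(y)) / dist(x, y)
                 for y in points if y != x and dist(x, y) <= eps]
    return max(quotients, default=0.0)

ok = all(
    abs(u(x) - u(y)) <= (sl(x) + sl(y)) / 2 * dist(x, y) + 1e-12
    for x in points for y in points if x != y and dist(x, y) <= eps
)
print(ok)                                 # expected: True
```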

On the other hand, every Borel ɛ-upper gradient g satisfies the upper gradient inequality along every curve with endpoints in the set $\{g \lt \infty\}$.

Proposition 4.4. Let $({\rm X},{\sf d})$ be a metric space and let $u\colon {\rm X} \to \mathbb{R}$. If $g\in {\rm UG}^{\varepsilon}(u)$ for some ɛ > 0 is Borel, then

(12)\begin{equation} \vert u(\omega(\gamma)) - u(\alpha(\gamma)) \vert \le \int_\gamma g, \end{equation}

for every rectifiable curve γ such that $\omega(\gamma), \alpha(\gamma) \in \{g \lt +\infty \}$. In particular if g takes only finite values then $g \in {\rm UG}^{}(u)$.

Remark 4.5. The inequality (12) does not hold in general for curves whose endpoints belong to the set $\{g = +\infty\}$. Indeed, let us consider $u=\chi_{\mathbb{Q}}$, i.e. the characteristic function of the set of rational numbers in $\mathbb{R}$. The function $g\colon \mathbb{R} \to [0,+\infty]$, $g(x) = +\infty$ if $x\in \mathbb{Q}$ and $g(x) = 0$ otherwise, belongs to ${\rm UG}^{\varepsilon}(u)$ for every ɛ > 0. However, it does not belong to ${\rm UG}^{}(u)$, since $\int_\gamma g = 0$ for every rectifiable curve γ of $\mathbb{R}$.

Proof of proposition 4.4

Assume by contradiction that there exists a rectifiable curve, which we may assume to be parametrized by arc length $\gamma\colon [0,L] \to {\rm X}$, with $L:=\ell(\gamma)$, such that

(13)\begin{equation} \int_0^L (g\circ \gamma)(s)\,{\mathrm d} s = \int_\gamma g \lt |u(\omega(\gamma))-u(\alpha(\gamma))| \lt \infty, \end{equation}

and $g(\alpha(\gamma)), g(\omega(\gamma)) \lt \infty$. By proposition 2.8 we can find a subsequence nj such that

\begin{equation*}\int_\gamma g \ge \varliminf_{j\to +\infty} \int_{{\sf c}_{t,n_j}} g.\end{equation*}

This, together with (13), implies the existence of an $\frac{L}{n_j}$-chain ${\sf c}_{t,n_j}$ with the same endpoints as γ such that

\begin{equation*}\int_{{\sf c}_{t,n_j}} g \lt |u(\omega(\gamma))-u(\alpha(\gamma))| = \vert u(\alpha({\sf c}_{t,n_j})) - u(\omega({\sf c}_{t,n_j}))\vert.\end{equation*}

This proves that $g \notin {\rm UG}^{\frac{L}{n_j}}(u)$. Since ${\rm UG}^{\varepsilon}(u) \subseteq {\rm UG}^{\frac{L}{n_j}}(u)$ as soon as $\frac{L}{n_j} \le \varepsilon$, this contradicts the assumption $g\in {\rm UG}^{\varepsilon}(u)$ for j large enough.

Remark 4.6. The results of this section remain true if we consider the λ-integral and $(\varepsilon,\lambda)$-upper gradients, for every $\lambda \in [0,1]$. The proof of proposition 4.4 follows by remark 2.10.

5. p-weak ɛ-upper gradients

In the classical theory of Sobolev spaces, one weakens the definition of upper gradients along curves by requiring that (10) holds for $\textrm{Mod}_p$-almost every curve. The definition of the outer measure $\textrm{Mod}_p$ will be recalled in $\S$ 7. In this section, we will give a similar definition for chain upper gradients and we will show similarities and differences with the classical setting of curves.

Let $({\rm X},{\sf d},\mathfrak m)$ be a metric measure space. Let ɛ > 0 and $p\ge 1$. The $(\varepsilon, p)$-modulus of a family of chains ${\sf C} \subseteq \mathscr{C}^{}$ is

\begin{equation*} {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}({\sf C}):=\inf \left\{\int \rho^p\,{\mathrm d}\mathfrak m:\, \rho \in {\rm Adm}^{\varepsilon}({{\sf C}})\right\} \end{equation*}

where ${\rm Adm}^{\varepsilon}({{\sf C}})=\left\{\rho \ge 0\,:\, \rho\ \text{Borel, }\int_{{\sf c}} \rho \ge 1\ \text{for every } {\sf c} \in {{\sf C}} \cap \mathscr{C}^{\varepsilon} \right\}$.
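
To get a feeling for the quantity just defined, here is a toy computation (our own illustration, not taken from the paper): on a two-point space $\{x,y\}$ with $\mathfrak m = a\delta_x + b\delta_y$ and for the family consisting of the single chain $\{x,y\}$ with ${\sf d}(x,y)=d\le\varepsilon$, admissibility reads $\frac{\rho(x)+\rho(y)}{2}\,d \ge 1$ and, for p > 1, a Lagrange multiplier computation gives the closed form below, which the sketch cross-checks by brute force.

```python
# Toy computation of the (eps, p)-modulus (our own example): X = {x, y},
# m = a*delta_x + b*delta_y, and the family consisting of the single chain {x, y}
# at distance d <= eps.  Admissibility reads (rho(x) + rho(y)) / 2 * d >= 1 and,
# for p > 1, a Lagrange multiplier computation gives
#     Mod = (2/d)^p * (a^{-1/(p-1)} + b^{-1/(p-1)})^{-(p-1)},
# which we cross-check below by a brute-force search over rho(x).
a, b, d, p = 2.0, 3.0, 0.5, 2.0

closed_form = (2 / d) ** p * (a ** (-1 / (p - 1)) + b ** (-1 / (p - 1))) ** (-(p - 1))

# rho(y) = max(2/d - rho(x), 0) is forced by admissibility and monotonicity
brute_force = min(a * s ** p + b * max(2 / d - s, 0) ** p
                  for s in [i * (2 / d) / 10000 for i in range(10001)])
print(closed_form, brute_force)   # the two values agree up to the grid resolution
```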

Proposition 5.1. Let ɛ > 0 and $p\ge 1$. Then ${\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}$ is an outer measure.

Proof. The only non-trivial property to be proven is ${\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\bigcup_{i=1}^\infty {\sf C}_i)\le \sum_{i=1}^\infty {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}({\sf C}_i)$ for $\{{\sf C}_i\}_{i=1}^\infty \subseteq \mathscr{C}^{}$. We assume that the right hand side is finite, otherwise there is nothing to prove. We fix δ > 0 and we choose positive numbers $\{\eta_i\}_i$ such that $\sum_{i}\eta_i \le \delta$. We take $\rho_i \in {\rm Adm}^{\varepsilon}({\sf C}_i)$ such that $\int \rho_i^p \,{\mathrm d} \mathfrak m \le {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}({\sf C}_i) + \eta_i$. The function $\rho:=(\sum_{i=1}^\infty (\rho_i)^p)^{\frac{1}{p}}$ satisfies $\rho \in {\rm Adm}^{\varepsilon}(\bigcup_{i=1}^\infty {\sf C}_i)$, since $\rho \ge \rho_i$ for every i. Therefore

\begin{equation*} {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}\left(\bigcup_{i=1}^\infty {\sf C}_i\right)\le \int \rho^p\,{\mathrm d} \mathfrak m \le \sum_{i=1}^\infty \int \rho_i^p\,{\mathrm d} \mathfrak m \le \sum_{i=1}^\infty {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}({\sf C}_i) +\delta. \end{equation*}

Letting δ go to 0, we obtain the claim.

Remark 5.2. The $(\varepsilon, p)$-modulus is concentrated on ɛ-chains in the following sense. Let ${\sf C} \subseteq \mathscr{C}^{}$ and let ${\sf C}^\varepsilon := {\sf C} \cap \mathscr{C}^{\varepsilon}$. Then ${\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}({\sf C} \setminus {\sf C}^\varepsilon) = 0$ since the function ρ = 0 belongs to ${\rm Adm}^{\varepsilon}({{\sf C}}\setminus {{\sf C}}^\varepsilon)$.

Let $({\rm X},{\sf d},\mathfrak m)$ be a metric measure space. Given a function $u \colon {\rm X} \to \mathbb{R}$, we say that a function $g \colon {\rm X} \to [0,+\infty]$ is a p-weak ɛ-upper gradient of u and we write $g \in {\rm WUG}_{p}^{\varepsilon}(u)$ if

\begin{equation*} \vert u(\omega({\sf c}))-u(\alpha({\sf c})) \vert \le \int_{{\sf c}} g \qquad \text{for }{\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}\text{-a.e. chain.} \end{equation*}

In particular if $g \in {\rm UG}^{\varepsilon}(u)$ then $g\in {\rm WUG}_{p}^{\varepsilon}(u)$ for every $p\ge 1$.

Remark 5.3. For $\lambda \in [0,1]$ one can define the $(\varepsilon, \lambda, p)$-modulus of a family of chains ${\sf C} \subseteq \mathscr{C}^{}$ by

\begin{equation*} {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon,\lambda}({\sf C}):=\inf \left\{\int \rho^p\,{\mathrm d}\mathfrak m:\, \rho \in {\rm Adm}^{\varepsilon, \lambda}({{\sf C}})\right\}, \end{equation*}

where ${\rm Adm}^{\varepsilon, \lambda}({{\sf C}})=\left\{\rho \ge 0\,:\, \rho\ \text{Borel, }{}^\lambda\!{\int}_{{\!\sf c}} \rho \ge 1\ \text{for every } {\sf c} \in {{\sf C}} \cap \mathscr{C}^{\varepsilon} \right\}$. This is still an outer measure which is concentrated on ɛ-chains. A function $g \colon {\rm X} \to [0,+\infty]$ is a p-weak $(\varepsilon, \lambda)$-upper gradient of $u\colon {\rm X} \to \mathbb{R}$ if

\begin{equation*} u(\omega({\sf c}))-u(\alpha({\sf c})) \le {\vphantom{\int}}^\lambda{\!\!\!}{\int}_{{\!\!\!{\sf c}}} g \qquad \text{for }{\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon,\lambda}\text{-a.e. chain.} \end{equation*}

The set of p-weak $(\varepsilon, \lambda)$-upper gradients of u is denoted by ${\rm WUG}_{p}^{\varepsilon, \lambda}(u)$. It holds ${\rm UG}^{\varepsilon, \lambda}(u) \subseteq {\rm WUG}_{p}^{\varepsilon, \lambda}(u)$.

The set of p-integrable p-weak ɛ-upper gradients of a given function is closed under $L^p({\rm X})$-convergence. This can be seen as a consequence of an appropriate version of Fuglede’s lemma in this context.

Proposition 5.4 (Fuglede’s lemma for chains)

Let gj be a sequence of $\mathfrak m$-measurable functions that converges in $L^p({\rm X})$. Then there is a subsequence $g_{j_k}$ with the following property: if g is any $\mathfrak m$-measurable representative of the $L^p({\rm X})$-limit of gj then

\begin{equation*}\lim_{k\to +\infty} \int_{{\sf c}} \vert g_{j_k} - g\vert = 0\end{equation*}

for ${\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}$-a.e. chain.

In the chain case this result is easier to prove and it is a consequence of the following easy but important fact.

Lemma 5.5. Let $E\subseteq {\rm X}$, ɛ > 0, $p\ge 1$. If $\mathfrak m(E) = 0$ then ${\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}(E)) = 0$.

Proof. We write $\mathscr{C}^{}(E) = \bigcup_{k\in \mathbb{N}} {\sf C}_k$, where

\begin{equation*}{\sf C}_k := \left\{{\sf c} = \{q_i\}_{i=0}^N \in \mathscr{C}^{}(E) \,:\, \min_{0\le i \lt N} {\sf d}(q_i,q_{i+1}) \geq \frac{1}{k} \right\}.\end{equation*}

By proposition 5.1, ${\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}$ is an outer measure. So it is enough to prove that ${\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}({\sf C}_k) = 0$ for every k. The function $\rho = 2k\cdot \chi_E$ belongs to ${\rm Adm}^{\varepsilon}({\sf C}_k)$: indeed, every ${\sf c} \in {\sf C}_k \cap \mathscr{C}^{\varepsilon}$ contains a point of E whose distance from a consecutive point of the chain is at least $\frac{1}{k}$, so that $\int_{\sf c} \rho \ge \frac{2k}{2}\cdot \frac{1}{k} = 1$. Therefore ${\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}({\sf C}_k) \le \int \rho^p\,{\mathrm d}\mathfrak m = (2k)^p\mathfrak m(E)=0$.

Remark 5.6. The previous lemma differs from the classical case, in which the modulus is defined in terms of rectifiable curves. Given a Borel set E with $\mathfrak m(E)=0$, ${\rm Mod}_p(\Gamma(E))=0$ if and only if ${\rm Cap}_p(E)=0$, see [Reference Heinonen, Koskela, Shanmugalingam and Tyson22, proposition 7.2.8], where $\Gamma(E)$ denotes the family of curves intersecting E. In other words, given an $\mathfrak m$-null set with positive capacity, the p-modulus of the curves hitting this set is positive. This does not happen in the case of chains, as lemma 5.5 shows. On the other hand, the same proof as that of lemma 5.5 shows, in the case of curves, that the p-modulus of the set of curves spending a positive amount of time in E is zero whenever $\mathfrak m(E) = 0$. In this case ${\rm Mod}_p$, restricted to $\Gamma(E)$, is concentrated on the family of curves spending time zero in E (see also [Reference Heinonen, Koskela, Shanmugalingam and Tyson22, lemma 5.2.15]).

Remark 5.7. Lemma 5.5 implies that if $g \in {\rm WUG}_{p}^{\varepsilon}(u)$ is $\mathfrak m$-measurable and h is another $\mathfrak m$-measurable function such that $\mathfrak m(\{g \neq h\})=0$, then $h \in {\rm WUG}_{p}^{\varepsilon}(u)$ as well.

Proof of proposition 5.4

Since $g_j$ converges to g in $L^p({\rm X})$, we can find a subsequence $g_{j_k}$ which converges to g pointwise almost everywhere. This means that we can find a set $E \subseteq {\rm X}$ with $\mathfrak m(E) = 0$ such that $\lim_{k\to +\infty} g_{j_k}(x) = g(x)$ for every $x\in {\rm X} \setminus E$. Let us consider the set of chains $\mathscr{C}^{}(E)$, which has $(\varepsilon, p)$-modulus 0 by lemma 5.5. We claim that for every chain which is not in $\mathscr{C}^{}(E)$ we have $\lim_{k\to +\infty} \int_{\sf c} \vert g_{j_k} - g \vert = 0$. Let us fix a chain ${\sf c} = \{q_i\}_{i=0}^N$ which is not in $\mathscr{C}^{}(E)$. Then $q_i \notin E$ for every $0\le i \le N$. In particular $\lim_{k\to +\infty}g_{j_k}(q_i) = g(q_i)$ for $0\le i \le N$. Therefore

\begin{align*}\lim_{k\to +\infty} \int_{\sf c} \vert g_{j_k} - g \vert &= \lim_{k\to +\infty} \sum_{i=0}^{N-1} \frac{\vert g_{j_k}(q_i) - g(q_i) \vert + \vert g_{j_k}(q_{i+1}) - g(q_{i+1})\vert}{2}{\sf d}(q_i,q_{i+1})\\ & = 0.\end{align*}

Remark 5.8. Lemma 5.5 is true for ${\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon,\lambda}$, for every $\lambda\in(0,1)$, with the same proof. An alternative argument is to observe that ${\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon,\lambda}$ and ${\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon,\lambda'}$ are mutually absolutely continuous if $\lambda,\lambda'\in (0,1)$. However, if $\lambda \in \{0,1\}$, lemma 5.5 is no longer true. Indeed, if $\mathbb{Q}$ is the set of rational numbers in $\mathbb{R}$, then ${\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon,\lambda}(\mathscr{C}^{}(\mathbb{Q})) = +\infty$ for $\lambda \in \{0,1\}$. This follows from the fact that every admissible function has to be equal to $+\infty$ on $\mathbb{R}\setminus \mathbb{Q}$. On the other hand, the same proof as above shows that given $E\subseteq {\rm X}$ such that $\mathfrak m(E)=0$, then

\begin{equation*}{\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon,1}(\{{\sf c} = \{q_i\}_{i=0}^N \in \mathscr{C}^{}\, : \, q_i \in E \text{ for some } i=0, \ldots, N-1\}) = 0\end{equation*}

and

\begin{equation*}{\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon,0}(\{{\sf c} = \{q_i\}_{i=0}^N \in \mathscr{C}^{}\, : \, q_i \in E \text{ for some } i=1, \ldots, N\}) = 0\end{equation*}

This is enough for adapting the proof of proposition 5.4 to $\lambda \in \{0,1\}$.

As a consequence we prove that the set ${\rm WUG}_{p}^{\varepsilon}(u) \cap L^p({\rm X})$ is closed in $L^p({\rm X})$. Actually a stronger statement, which is the chain counterpart of [Reference Heinonen, Koskela, Shanmugalingam and Tyson22, proposition 6.3.30], holds. Notice that [Reference Heinonen, Koskela, Shanmugalingam and Tyson22, proposition 6.3.30] requires the convergence of $u_j$ to u ${\rm Cap}_p$-a.e., while in our result it is enough to assume convergence $\mathfrak m$-a.e.

Proposition 5.9. Let $u_j \to u$ pointwise $\mathfrak m$-almost everywhere, let $g_j \in {\rm WUG}_{p}^{\varepsilon}(u_j)$ and suppose $g_j \to g$ in $L^p({\rm X})$. Then $g \in {\rm WUG}_{p}^{\varepsilon}(u)$.

Proof. Let E be the set of points of ${\rm X}$ where the convergence of $u_j$ to u does not hold. By proposition 5.4 we can extract a subsequence, not relabeled, and a set of chains ${\sf C}$ such that $\lim_{j\to +\infty} \int_{\sf c} g_j = \int_{\sf c} g$ for every ${\sf c} \in {\sf C}$ and with ${\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}\setminus {\sf C}) = 0$. Since $g_j \in {\rm WUG}_{p}^{\varepsilon}(u_j)$, we can find sets of chains ${{\sf C}}_j$ such that $|u_j(\omega({\sf c})) - u_j(\alpha({\sf c}))| \leq \int_{\sf c} g_j$ for every ${\sf c} \in {{\sf C}}_j$ and such that ${\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{} \setminus {{\sf C}}_j) = 0$. The set of chains ${{\sf C}}' = \left({{\sf C}} \cap \bigcap_{j \in \mathbb{N}} {{\sf C}}_j\right) \setminus \mathscr{C}^{}(E)$ still satisfies ${\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{} \setminus {\sf C}') = 0$, because of proposition 5.1, since $\mathscr{C}^{}\setminus {{\sf C}}' = \mathscr{C}^{}(E) \cup (\mathscr{C}^{} \setminus {{\sf C}}) \cup \bigcup_j (\mathscr{C}^{} \setminus {{\sf C}}_j)$.

For every ${\sf c} \in {{\sf C}}'$ we have

\begin{equation*}|u(\omega({\sf c})) - u(\alpha({\sf c}))| = \lim_{j\to +\infty} |u_j(\omega({\sf c})) - u_j(\alpha({\sf c}))| \le \lim_{j\to +\infty} \int_{\sf c} g_j = \int_{\sf c} g.\end{equation*}

This shows that $g\in {\rm WUG}_{p}^{\varepsilon}(u)$.

Remark 5.10. Proposition 5.9 remains true for every $\lambda \in (0,1)$, while it is not clear if it holds for $\lambda \in \{0,1\}$. However, it is still true, and the proof is the same, that ${\rm WUG}_{p}^{\varepsilon,1}(u)$ and ${\rm WUG}_{p}^{\varepsilon,0}(u)$ are closed with respect to the $L^p({\rm X})$-topology.

As a consequence we can find a p-weak ɛ-upper gradient of minimal norm.

Proposition 5.11. The set ${\rm WUG}_{p}^{\varepsilon}(u) \cap L^p({\rm X})$ is a closed, convex subset of $L^p({\rm X})$. If not empty, it contains an element of minimal $L^p({\rm X})$-norm. If p > 1 such element is unique.

Proof. We already showed in proposition 5.9 that ${\rm WUG}_{p}^{\varepsilon}(u)$ is closed, while its convexity is trivial. The existence of an element of minimal norm, i.e. the existence of a projection of $0 \in L^p({\rm X})$ on ${\rm WUG}_{p}^{\varepsilon}(u)$, follows directly. The uniqueness statement for p > 1 is a consequence of the strict convexity of the norm of $L^p({\rm X})$ for p > 1.

On the other hand, the minimal norm can also be computed using genuine ɛ-upper gradients, thanks to the next result.

Proposition 5.12. The set ${\rm WUG}_{p}^{\varepsilon}(u) \cap L^p({\rm X})$ is the $L^p({\rm X})$-closure of ${\rm UG}^{\varepsilon}(u) \cap L^p({\rm X})$.

Proof. By proposition 5.9 we know that ${\rm WUG}_{p}^{\varepsilon}(u) \cap L^p({\rm X})$ is closed in $L^p({\rm X})$ and therefore it contains the $L^p({\rm X})$-closure of ${\rm UG}^{\varepsilon}(u) \cap L^p({\rm X})$. Let $g\in {\rm WUG}_{p}^{\varepsilon}(u)$. Let ${\sf C}$ be a family of chains such that ${\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}\setminus {\sf C}) = 0$ and such that $|u(\omega({\sf c})) - u(\alpha({\sf c}))| \le \int_{\sf c} g$ for every ${\sf c} \in {\sf C}$. By definition, for every $j\ge 1$ there exists an admissible map $\rho_j \in \text{Adm}^\varepsilon(\mathscr{C}^{}\setminus {\sf C})$ such that $\int \rho_j^p\,{\mathrm d}\mathfrak m \le 2^{-jp}$. We set $\rho = \left(\sum_{j\ge 1} \rho_j^p\right)^{\frac{1}{p}} \in \text{Adm}^\varepsilon(\mathscr{C}^{}\setminus \sf C)$, because $\rho \ge \rho_j$ for every j. Moreover $\rho \in L^p({\rm X})$ and $\int_{\sf c}\rho = \infty$ for every ${\sf c} \in \mathscr{C}^{}\setminus {\sf C}$. Now, for every $k\in \mathbb{N}$, we define the function $g_k := g + 2^{-k}\rho$. It is easy to check that gk converges to g in $L^p({\rm X})$ and that $g_k \in {\rm UG}^{\varepsilon}(u)$.

Example 5.13. Non-uniqueness of the minimal weak chain upper gradient

If p = 1 it can happen that there is more than one element of minimal norm in ${\rm WUG}_{1}^{\varepsilon}(u)$. We now produce an example of a metric measure space $({\rm X},{\sf d},\mathfrak m)$ and a function $u\colon {\rm X} \to \mathbb{R}$ such that for every $0 \lt \varepsilon \le \frac{1}{3}$ there are infinitely many elements of minimal norm in ${\rm WUG}_{1}^{\varepsilon}(u)$.

We define the following two sequences of real numbers: $x_n = n$, $y_n = n + \frac{1}{n}$, for $n\ge 3$. Let ${\rm X}$ be the countable set ${\rm X} := \bigcup_{n \ge 3} \{x_n, y_n\}\subset \mathbb{R}$. We endow ${\rm X}$ with the Euclidean distance and with the reference measure $\mathfrak m := \sum_{n \ge 3} \frac{1}{n^3} \left( \delta_{x_n}+ \delta_{y_n} \right)$. Notice that ${\rm X}$ is complete. We define the function $u \colon {\rm X} \to \mathbb{R}$ as $u(x_n)=1$ and $u(y_n)=0$ for every $n \ge 3$. We fix $0 \lt \varepsilon\le \frac{1}{3}$ and we notice that all the possible nonconstant ɛ-chains with two elements are of the form $\{x_n,y_n\}$ and $\{y_n,x_n\}$, for $n\ge \varepsilon^{-1}$. Therefore, by remark 4.2, a function $g\colon {\rm X} \to [0,+\infty]$ is an ɛ-upper gradient of u if and only if

\begin{equation*} g(x_n) + g(y_n) \ge 2n \quad \forall n \ge \varepsilon^{-1}. \end{equation*}

Hence, its $L^1(\mathfrak m)$ norm satisfies the following lower bound

\begin{equation*} \|g\|_{L^1(\mathfrak m)} = \sum_{n \ge 3} \frac{g(x_n)+g(y_n)}{n^3} \ge 2\sum_{n \ge \varepsilon^{-1}} \frac{1}{n^2} =:L_\varepsilon. \end{equation*}

For every $\mu \in [0,1]$ the function $g_{\mu,\varepsilon}$ defined as

\begin{equation*} g_{\mu,\varepsilon}(x) := \begin{cases} 2n\mu & x=x_n\ \text{for } n \ge \varepsilon^{-1}\\ 2n(1-\mu) & x=y_n\ \text{for } n \ge \varepsilon^{-1}\\ 0 & \text{otherwise} \end{cases} \end{equation*}

satisfies $\|g_{\mu,\varepsilon}\|_{L^1(\mathfrak m)}=L_\varepsilon$. Therefore the functions $g_{\mu,\varepsilon}$ are all ɛ-upper gradients of u of minimal $L^1$-norm and, by proposition 5.12, they provide infinitely many elements of minimal norm in ${\rm WUG}_{1}^{\varepsilon}(u)$.
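
The computation above can be double-checked numerically. In the following sketch (our own; the truncation parameter and the tolerance are arbitrary choices) we truncate the space to finitely many pairs $(x_n,y_n)$, verify the constraint $g_{\mu,\varepsilon}(x_n)+g_{\mu,\varepsilon}(y_n)\ge 2n$ and confirm that all the $g_{\mu,\varepsilon}$ have the same $L^1(\mathfrak m)$-norm $L_\varepsilon$.

```python
# Numerical sanity check of example 5.13 (our own truncation of the infinite sums).
eps = 1.0 / 3.0
ns = range(3, 1001)                     # points x_n = n, y_n = n + 1/n, truncated

L_eps = 2 * sum(1.0 / n ** 2 for n in ns if n >= 1.0 / eps)

for mu in (0.0, 0.25, 0.5, 1.0):
    g_x = lambda n, mu=mu: 2 * n * mu if n >= 1.0 / eps else 0.0
    g_y = lambda n, mu=mu: 2 * n * (1 - mu) if n >= 1.0 / eps else 0.0
    # the eps-upper gradient constraint g(x_n) + g(y_n) >= 2n ...
    assert all(g_x(n) + g_y(n) >= 2 * n for n in ns if n >= 1.0 / eps)
    # ... and the L^1(m) norm sum_n (g(x_n) + g(y_n)) / n^3 equals L_eps
    norm = sum((g_x(n) + g_y(n)) / n ** 3 for n in ns)
    assert abs(norm - L_eps) < 1e-9
print("every mu gives an eps-upper gradient of the same minimal norm", L_eps)
```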

Remark 5.14. Every result of this section extends verbatim to the case $\lambda \in (0,1)$, while some differences appear in the case $\lambda \in \{0,1\}$. For simplicity we state the results for λ = 1, the case λ = 0 being analogous. We have already noticed that we do not know if proposition 5.9 holds for λ = 1, but in any case ${\rm WUG}_{p}^{\varepsilon,1}(u)$ is closed. Moreover it is possible to show, adapting verbatim the proof of [Reference Heinonen, Koskela, Shanmugalingam and Tyson22, lemma 6.3.8], that ${\rm WUG}_{p}^{\varepsilon,1}(u)$ is a lattice. As a consequence there exists a minimal p-weak $(\varepsilon,1)$-upper gradient $g_u$ of u in the following sense: if $g\in {\rm WUG}_{p}^{\varepsilon, 1}(u)$ is $\mathfrak m$-measurable then $g_u \le g$ $\mathfrak m$-a.e. For the function u in example 5.13 and p = 1, the minimal gradient $g_u$ is equal to

\begin{equation*} g_{u}(x) := \begin{cases} n & x=x_n\ \text{for }n \ge \varepsilon^{-1}\\ 0 & \text{otherwise.} \end{cases} \end{equation*}

The unique element of minimal norm in ${\rm WUG}_{1}^{\varepsilon, 0}(u)$ is instead

\begin{equation*} g_u(x) := \begin{cases} n & x=y_n\ \text{for } n \ge \varepsilon^{-1}\\ 0 & \text{otherwise.} \end{cases} \end{equation*}

6. The chain Sobolev spaces

In this section, we introduce two new functionals and we study their relaxations. The two functionals are

\begin{equation*} \begin{aligned} {\rm F}_{\mathscr{C}^{}} \colon \mathcal{L}^{p}({\rm X}) \to [0,+\infty], \quad &u \mapsto \lim_{\varepsilon \to 0} \inf\left\{\| g \|_{L^p({\rm X})}\,:\, g\in {\rm UG}^{\varepsilon}(u),\ g\ \text{is } \mathfrak m\text{-measurable}\right\},\\ {\rm F}_{\mathscr{C}^{},\, {\rm Lip}} \colon \mathcal{L}^{p}({\rm X}) \to [0,+\infty], \quad &u \mapsto \begin{cases} \lim_{\varepsilon \to 0} \inf\left\{\| g \|_{L^p({\rm X})}\,:\, g\in {\rm LUG}^{\varepsilon}(u)\right\} & \text{if } u\in {\rm Lip}({\rm X}),\\ +\infty & \text{otherwise}, \end{cases} \end{aligned} \end{equation*}

where the infimum of an empty set is $+\infty$. By proposition 5.12, ${\rm F}_{\mathscr{C}^{}}$ can be equivalently defined by ${\rm F}_{\mathscr{C}^{}}(u) = \lim_{\varepsilon \to 0} \inf\left\{\| g \|_{L^p({\rm X})}\,:\, g\in {\rm WUG}_{p}^{\varepsilon}(u),\, g\ \text{is } \mathfrak m\text{-measurable}\right\}$. Moreover, one obtains the same quantity considering the infimum among Borel functions instead of $\mathfrak m$-measurable ones, because of the Vitali-Carathéodory theorem (cp. [Reference Heinonen, Koskela, Shanmugalingam and Tyson22, p.108]). The limits in the definitions exist because the quantities under the limits are monotone in ɛ: indeed, if $\varepsilon' \le \varepsilon$ then every ɛ-upper gradient is also an $\varepsilon'$-upper gradient, so the infimum at scale $\varepsilon'$ is not larger than the one at scale ɛ. The two functionals satisfy properties (a), (b) and (c) of section 2.1. The less trivial property, which is (c), is a consequence of the symmetry property in (6), which implies that ${\rm UG}^{\varepsilon}(u) = {\rm UG}^{\varepsilon}(-u)$ for every Borel function $u\colon {\rm X} \to \mathbb{R}$.

The relaxations of the functionals above are denoted respectively by $\tilde{\rm F}_{{\rm \mathscr{C}^{}}}$ and $\tilde{\rm F}_{{\rm \mathscr{C}^{}, \,{\rm Lip}}}$. The associated Banach spaces are respectively $(H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X}), \Vert \cdot \Vert_{H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X})})$ and $(H^{1,p}_{{\rm \mathscr{C}^{}, \, {\rm Lip}}}({\rm X}), \Vert \cdot \Vert_{H^{1,p}_{{\rm \mathscr{C}^{}, \, {\rm Lip}}}({\rm X})})$. Since ${\rm F}_{\mathscr{C}^{}, \, {\rm Lip}}(u) \ge {\rm F}_{\mathscr{C}^{}}(u)$ for every $u\in \mathcal{L}^{p}({\rm X})$, we have that

\begin{equation*}H^{1,p}_{{\rm \mathscr{C}^{},\,{\rm Lip}}}({\rm X}) \subseteq H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X}),\end{equation*}

and the inclusion is 1-Lipschitz.

Remark 6.1. The domain of ${\rm F}_{\mathscr{C}^{}}$, namely the set of functions in $\mathcal{L}^{p}({\rm X})$ that admit an $L^p({\rm X})$-integrable ɛ-upper gradient for some ɛ > 0, is clearly larger than the domain of ${\rm F}_{\mathscr{C}^{},\,{\rm Lip}}$, but it is also bigger than the set of functions (not necessarily Lipschitz) that admit p-integrable, Lipschitz, ɛ-upper gradients for some ɛ > 0. Indeed, it contains functions that are highly non-regular, as the next example shows. Let $u = \chi_{\mathbb{Q}}$ be the characteristic function of the rational numbers on $\mathbb{R}$. It can be readily checked that ${\rm LUG}^{\varepsilon}(u) = \emptyset$ for every ɛ > 0, thus u does not admit a p-integrable, Lipschitz, ɛ-upper gradient, while $g\colon \mathbb{R} \to [0,+\infty]$, $g = \infty \cdot \chi_{\mathbb{Q}}$, belongs to ${\rm UG}^{\varepsilon}(u)$. Since $\Vert g \Vert_{L^p(\mathbb{R})} = 0$ we have that ${\rm F}_{\mathscr{C}^{}}(u) = 0$.

In order to become familiar with the definitions of ${\rm F}_{\mathscr{C}^{}}$ and $H^{1,p}_{{\rm \mathscr{C}^{}}}$, we compute this space explicitly in the case of a snowflake of a metric measure space. In view of theorem 1.1 and of the well-known fact that there are no nonconstant rectifiable curves in such spaces, so that $H^{1,p}_{{\rm curve}}({\rm X}) = L^p({\rm X})$, we know that $H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X}) = L^p({\rm X})$ must hold too.

Example 6.2. Snowflaking of a metric space $({\rm X},{\sf d})$

Let $({\rm X},{\sf d})$ be a metric space and $0 \lt \alpha \lt 1$. We consider $({\rm X},{\sf d}^\alpha)$, where ${\sf d}^\alpha(x,y):=({\sf d}(x,y))^\alpha$. Let $\mathfrak m$ be a Borel measure on $({\rm X},{\sf d})$ (so a Borel measure on $({\rm X},{\sf d}^\alpha)$ too) and consider ɛ > 0. Notice that ${\sf c} \in \mathscr{C}^{\varepsilon}({\rm X},{\sf d})$ if and only if ${\sf c} \in \mathscr{C}^{\varepsilon^\alpha}({\rm X},{\sf d}^\alpha)$. We claim that, if g is an ɛ-upper gradient of u on $({\rm X},{\sf d})$, then $\varepsilon^{1-\alpha} g$ is an $\varepsilon^\alpha$-upper gradient of u on $({\rm X},{\sf d}^\alpha)$. Indeed, for a chain ${\sf c}=\{q_i\}_{i=0}^N$ such that ${\sf d}(q_i,q_{i+1})\le \varepsilon$ we have

\begin{align*} |u(\omega({\sf c}))-u(\alpha({\sf c}))| & \le \sum_i \frac{g(q_i) + g(q_{i+1})}{2}\,{\sf d}(q_i,q_{i+1}) \\ & = \sum_i \frac{g(q_i) + g(q_{i+1})}{2}\,{\sf d}^\alpha(q_i,q_{i+1})\,{\sf d}^{1-\alpha}(q_i,q_{i+1}) \\ &\le \sum_i \frac{\varepsilon^{1-\alpha}g(q_i) + \varepsilon^{1-\alpha}g(q_{i+1})}{2}\,{\sf d}^\alpha(q_i,q_{i+1}). \end{align*}

Therefore, for every $u\in L^p({\rm X})$, we have

\begin{equation*} \begin{aligned} {\rm F}_{\mathscr{C}^{}}^{{\rm X},{\sf d}^\alpha,\mathfrak m}(u) &= \lim_{\varepsilon \to 0} \inf\left\{\Vert g \Vert_{L^p({\rm X})}\,:\, g\ \text{is a } \varepsilon^\alpha\text{-upper gradient of } u\ \text{in } ({\rm X},{\sf d}^\alpha) \right\} \\ &\le \lim_{\varepsilon \to 0} \varepsilon^{1-\alpha}\inf\left\{\Vert g \Vert_{L^p({\rm X})}\,:\, g\ \text{is a } \varepsilon\text{-upper gradient of } u \ \text{in }\ ({\rm X},{\sf d}) \right\}. \end{aligned} \end{equation*}

In particular, since every function $u\in {\rm Lip}({\rm X},{\sf d})$ with bounded support has a chain upper gradient, namely ${\rm sl}_{\varepsilon}(u)$, which is in $L^p({\rm X})$, we deduce that ${\rm F}_{\mathscr{C}^{}}^{{\rm X},{\sf d}^\alpha,\mathfrak m}(u)= 0$ for all such functions. Therefore $\tilde{\rm F}_{{\rm \mathscr{C}^{}}}^{{\rm X},{\sf d}^\alpha,\mathfrak m}(u) = 0$ for every $u\in L^p({\rm X})$, since the class of Lipschitz functions (w.r.t. ${\sf d}$) with bounded support is dense in $L^p({\rm X})$. Thus $H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X},{\sf d}^\alpha,\mathfrak m) = L^p({\rm X})$.
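
For a concrete feeling of why the chain energy vanishes on a snowflake, the following rough numerical illustration (entirely ours: we discretise $([0,1],|x-y|^\alpha)$ with Lebesgue measure and take $u(x)=x$) shows that the slope at level ɛ computed with respect to the snowflaked distance is bounded by $\varepsilon^{(1-\alpha)/\alpha}$, so that its $L^p$ norm tends to 0 as $\varepsilon \to 0$, consistently with ${\rm F}_{\mathscr{C}^{}}^{{\rm X},{\sf d}^\alpha,\mathfrak m}(u)=0$.

```python
# Rough illustration of example 6.2 (our own discretisation of ([0,1], |x-y|^alpha)):
# the eps-slope of u(x) = x w.r.t. the snowflaked distance is at most eps^((1-alpha)/alpha),
# so its L^p norm vanishes as eps -> 0, in accordance with F_C(u) = 0 on the snowflake.
alpha, p = 0.5, 2
points = [i / 500 for i in range(501)]
d_alpha = lambda x, y: abs(x - y) ** alpha
u = lambda x: x

def sl_alpha(x, eps):
    q = [abs(u(x) - u(y)) / d_alpha(x, y)
         for y in points if y != x and d_alpha(x, y) <= eps]
    return max(q, default=0.0)

for eps in (0.5, 0.3, 0.2, 0.1):
    norm = (sum(sl_alpha(x, eps) ** p for x in points) / len(points)) ** (1 / p)
    print(eps, norm, eps ** ((1 - alpha) / alpha))   # norm is bounded by the last column
```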

6.1. Proof of the main results

The goal of this section is to compare the space $H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X})$ with $H^{1,p}_{{\rm curve}}({\rm X})$ and $H^{1,p}_{{\rm AGS}}({\rm X})$, with the help of $H^{1,p}_{{\rm \mathscr{C}^{},\,{\rm Lip}}}({\rm X})$.

Proposition 6.3. Let $({\rm X},{\sf d},\mathfrak m)$ be a metric measure space. Then

\begin{equation*} H^{1,p}_{{\rm \mathscr{C}^{},\, {\rm Lip}}}({\rm X}) \subseteq H^{1,p}_{{\rm curve}}({\rm X}) \end{equation*}

and

\begin{equation*} \|u \|_{H^{1,p}_{{\rm curve}}({\rm X})} \le \|u \|_{H^{1,p}_{{\rm \mathscr{C}^{}, {\rm Lip}}}({\rm X})} \end{equation*}

for every $u\in L^p({\rm X})$.

Proof. Proposition 4.4 implies that ${\rm F}_{\mathrm{curve}}(u) \le {\rm F}_{\mathscr{C}^{},\,{\rm Lip}}(u)$ for every $u\in \mathcal{L}^{p}({\rm X})$ and this concludes the proof.

Theorem 6.4. Let $({\rm X},{\sf d},\mathfrak m)$ be a metric measure space such that $({\rm X},{\sf d})$ is complete. Then

\begin{equation*} H^{1,p}_{{\rm \mathscr{C}^{},\, {\rm Lip}}}({\rm X}) = H^{1,p}_{{\rm curve}}({\rm X}), \end{equation*}

and

\begin{equation*} \|u \|_{H^{1,p}_{{\rm \mathscr{C}^{},\, {\rm Lip}}}({\rm X})} = \|u \|_{H^{1,p}_{{\rm curve}}({\rm X})}, \end{equation*}

for every $u\in L^p({\rm X})$.

Remark 6.5. The proof of theorem 6.4 can be done following word by word the proof of [Reference Eriksson-Bique18, theorem 1.1], with very few modifications, like the obvious one due to our definition of integral along chains, which requires lemma 2.7 in place of [Reference Eriksson-Bique18, lemma 2.19]. However, we are not able to use this scheme of proof for the next theorem 6.7. For this reason we propose a proof of theorem 6.4 which is still inspired by the one of [Reference Eriksson-Bique18, theorem 1.1], but which can be easily modified to prove theorem 6.7. The main difference lies in the simplification procedures: while we are able to reduce the proof to bounded functions with bounded support, in theorem 6.7 we are not able to restrict the study to nonnegative functions. This is due to the fact that the p-minimal ɛ-weak upper gradients are not local in any suitable sense. In particular, it is not clear how to show that $\tilde{\rm F}_{{\rm \mathscr{C}^{}}}(u) = \tilde{\rm F}_{{\rm \mathscr{C}^{}}}(u_+) + \tilde{\rm F}_{{\rm \mathscr{C}^{}}}(u_-)$, where $u_+$ and $u_-$ are the positive and negative parts of u. Notice that, a posteriori, this has to be true because of theorems 6.7 and 6.4, since $\tilde{\rm F}_{{\rm curve}}(u) = \tilde{\rm F}_{{\rm curve}}(u_+) + \tilde{\rm F}_{{\rm curve}}(u_-)$.

Proof of theorem 6.4

For simplicity we divide the proof into steps.

Step 1. It is enough to prove that for every bounded function u with bounded support it holds $\tilde{\rm F}_{{\rm \mathscr{C}^{},\,{\rm Lip}}}(u) \le {\rm F}_{\mathrm{curve}}(u)$. Indeed, by [Reference Heinonen, Koskela, Shanmugalingam and Tyson22, proposition 7.1.35], for every $u\in L^p({\rm X})$ such that ${\rm F}_{\mathrm{curve}}(u) \lt \infty$ we can find a sequence of bounded functions uj with bounded support such that $u_j \to u$ in $L^p({\rm X})$ and $\varliminf_{j\to +\infty}{\rm F}_{\mathrm{curve}}(u_j) \le {\rm F}_{\mathrm{curve}}(u)$. Therefore

\begin{equation*}\tilde{\rm F}_{{\rm \mathscr{C}^{},\,{\rm Lip}}}(u) \le \varliminf_{j\to +\infty}\tilde{\rm F}_{{\rm \mathscr{C}^{},\,{\rm Lip}}}(u_j) \le \varliminf_{j\to +\infty}{\rm F}_{\mathrm{curve}}(u_j) \le {\rm F}_{\mathrm{curve}}(u),\end{equation*}

because of the lower semicontinuity of $\tilde{\rm F}_{{\rm \mathscr{C}^{},\,{\rm Lip}}}$. Since $\tilde{\rm F}_{{\rm curve}}$ is the biggest lower semicontinuous functional which is smaller than or equal to ${\rm F}_{\mathrm{curve}}$, we infer that $\tilde{\rm F}_{{\rm \mathscr{C}^{},\,{\rm Lip}}}(u) \le \tilde{\rm F}_{{\rm curve}}(u)$ for every $u\in L^p({\rm X})$ such that ${\rm F}_{\mathrm{curve}}(u) \lt \infty$. The claim for an arbitrary $u\in L^p({\rm X})$ follows directly from the definitions of $\tilde{\rm F}_{{\rm \mathscr{C}^{},\,{\rm Lip}}}(u)$ and $\tilde{\rm F}_{{\rm curve}}(u)$.

In the following we fix $x_0\in {\rm X}$ and we assume that $u\colon {\rm X} \to [-M,M]$ and that there exists $R\ge 3$ such that $u{|_{{\rm X}\setminus B_R(x_0)}} = 0$.

Step 2. We claim that it is enough to prove the following statement. For every Borel upper gradient $g\in {\rm UG}^{}(u)$ and for every η > 0 there exists another Borel upper gradient $g_\eta \in {\rm UG}^{}(u)$ such that

(14)\begin{equation} \Vert g - g_\eta \Vert_{L^p({\rm X})} \lt \eta, \end{equation}

and with the following property. For every $j\in \mathbb{N}$ there exist functions $u_{\eta,j} \colon {\rm X} \to \mathbb{R}$ and $g_{\eta,j}\in {\rm LUG}^{\frac{1}{j}}(u_{\eta,j})$ such that

(15)\begin{equation} \varlimsup_{j\to +\infty} \Vert u_{\eta,j} - u \Vert_{L^p({\rm X})} \le \eta \quad \text{and} \quad \lim_{j\to +\infty} \Vert g_{\eta,j} - g_\eta \Vert_{L^p({\rm X})} = 0. \end{equation}

Indeed, if the claim is true, then we have

\begin{equation*} \varliminf_{j\to +\infty}{\rm F}_{\mathscr{C}^{},\, {\rm Lip}}(u_{\eta,j}) \le \varliminf_{j\to +\infty} \Vert g_{\eta,j}\Vert_{L^p({\rm X})}= \Vert g_{\eta}\Vert_{L^p({\rm X})} \le \Vert g\Vert_{L^p({\rm X})} + \eta, \end{equation*}

for every η > 0, where we used (14) and (15). By a diagonal argument we deduce that $\tilde{\rm F}_{{\rm \mathscr{C}^{},\,{\rm Lip}}}(u)\le \Vert g\Vert_{L^p({\rm X})}$. By the arbitrariness of $g\in {\rm UG}^{}(u)$ we infer that $\tilde{\rm F}_{{\rm \mathscr{C}^{},\,{\rm Lip}}}(u) \le {\rm F}_{\mathrm{curve}}(u)$, which is the statement we had to prove from Step 1.

In the remaining steps we will prove the claim of Step 2. In order to simplify the proof, we notice that it is enough to prove the statement for upper gradients $g\in {\rm UG}^{}(u)$ that are lower semicontinuous and such that $g \equiv 0$ on ${\rm X}\setminus B_{2R}(x_0)$. The first assertion follows by the Vitali-Carathéodory theorem (cp. [Reference Heinonen, Koskela, Shanmugalingam and Tyson22, page 108]), while the second one follows by truncation: for every $g\in {\rm UG}^{}(u)$, the truncated function $g\cdot \chi_{B_{2R}(x_0)}$ is still an upper gradient of u since $u{|_{{\rm X}\setminus B_R(x_0)}} = 0$ and it is smaller than the original one. In the sequel we assume that g has these properties.

Step 3. For every g as above, we define $g_\eta$. By Lusin’s Theorem and the fact that $\mathfrak m$ is Radon, we can find compact sets $K_j \subseteq B_{2R}(x_0)$ such that $\mathfrak m(B_{2R}(x_0) \setminus K_j)^{\frac{1}{p}} \le 2^{-(j+1)}\eta$ and so that $u{|_{K_j}}$ is continuous. We can suppose $K_j \subseteq K_{j+1}$ for every j. Let $\sigma \in (0, 1)$ be so that $\mathfrak m(B_{2R}(x_0))^{\frac{1}{p}}\sigma \le \frac{\eta}{4}$. Define

\begin{equation*} g_\eta(x) := g(x) + \sigma\chi_{B_{2R}(x_0)} + \sum_{i=1}^\infty \chi_{B_{2R}(x_0)\setminus K_i}(x). \end{equation*}

The function gη is still lower semicontinuous and belongs to ${\rm UG}^{}(u)$, since it is bigger than g. Moreover, $g_\eta \equiv 0$ on ${\rm X} \setminus B_{2R}(x_0)$. We show that (14) holds. Indeed

\begin{equation*} \Vert g - g_\eta \Vert_{L^p({\rm X})} \le \sigma \mathfrak m(B_{2R}(x_0))^{\frac{1}{p}} + \sum_{i=1}^\infty \mathfrak m(B_{2R}(x_0)\setminus K_i)^{\frac{1}{p}} \le \frac{\eta}{4} + \frac{\eta}{2} \lt \eta. \end{equation*}

In the next step we define auxiliary functions $\hat{g}_{\eta,j}$. Later we will slightly modify these functions in order to define the $g_{\eta,j}$’s of Step 2.

Step 4. We proceed to the definition of $\hat{g}_{\eta,j}$. Let gj be an increasing sequence of bounded Lipschitz functions such that $g_j \nearrow g$ pointwise, whose existence is guaranteed for instance by [Reference Heinonen, Koskela, Shanmugalingam and Tyson22, corollary 4.2.3]. Define $\psi_{j}(x) := \max\{0,\min\{1, j(2R - {\sf d}(x_0, x))\}\}$. Observe that ψj is j-Lipschitz, that $\psi_{j} \le \chi_{B_{2R}(x_0)}$, that $\psi_{j} \equiv 1$ on $B_{2R - \frac{1}{j}}(x_0)$ and that $\psi_{j} \to \chi_{B_{2R}(x_0)}$ pointwise as $j\to +\infty$. We define

\begin{equation*} \hat{g}_{\eta,j}(x) = g_j(x) + \sigma\psi_j(x) + \sum_{i=1}^j \min\{j{\sf d}(x,K_i), \psi_j(x)\}. \end{equation*}

By definition, $\hat{g}_{\eta,j}$ is Lipschitz and bounded. Moreover, $\hat{g}_{\eta,j} \le g_\eta$ for every j and $\hat{g}_{\eta,j} \nearrow g_\eta$ pointwise as $j \to +\infty$.

We define auxiliary functions $\hat{u}_{\eta,j}$. In Step 6 we will modify them in order to define the functions $u_{\eta,j}$ required by Step 2.

Step 5. We define $\hat{u}_{\eta,j}$. We choose $N \in \mathbb{N}$ so that $\mathfrak m(B_{2R}(x_0) \setminus K_N)^{\frac{1}{p}} \le (2M)^{-1}\eta$. We define the closed set $A := K_N \cup ({\rm X} \setminus B_R(x_0))$. Since $u{|_{K_N}}$ and $u{|_{X\setminus B_R(x_0)}}$ are continuous and both sets are closed, $u{|_{A}}$ is continuous as well. We set

\begin{equation*}\hat{u}_{\eta,j}(x) := \min\left\{M, \inf\left\{u(\alpha({\sf c})) + \int_{\sf c} \hat{g}_{\eta,j} \,:\, {\sf c} \in \mathscr{C}^{\frac{1}{j}}, \omega({\sf c}) = x, \alpha({\sf c}) \in A\right\} \right\}.\end{equation*}

These functions satisfy the following properties:

  1. (a) $\hat{u}_{\eta,j}\colon {\rm X} \to [-M,M]$ and $\hat{u}_{\eta,j} \le u$ on A: this follows directly from the definition.

  2. (b) $\hat{u}_{\eta,j}$ is $\max\{2Mj, \sup_{\rm X} \hat{g}_{\eta,j}\}$-Lipschitz. Indeed if $x,y\in {\rm X}$ are such that ${\sf d}(x,y) \gt \frac{1}{j}$ then $\vert \hat{u}_{\eta,j}(x) - \hat{u}_{\eta,j}(y)\vert \le 2M \le 2Mj{\sf d}(x,y)$. On the other hand, if ${\sf d}(x,y) \lt \frac{1}{j}$ then $\{x,y\}\in\mathscr{C}^{\frac{1}{j}}_{x,y}$, implying that $\vert \hat{u}_{\eta,j}(y) - \hat{u}_{\eta,j}(x)\vert \le \frac{\hat{g}_{\eta,j}(x) + \hat{g}_{\eta,j}(y)}{2}{\sf d}(x,y) \le \sup_{\rm X} \hat{g}_{\eta,j} {\sf d}(x,y)$.

  3. (c) $\hat{g}_{\eta,j} \in {\rm UG}^{\frac{1}{j}}(\hat{u}_{\eta,j})$. We prove that $\hat{u}_{\eta,j}(y) - \hat{u}_{\eta,j}(x) \le \int_{{\sf c}}\hat{g}_{\eta,j}$ for every $x,y\in {\rm X}$ and every ${\sf c} \in \mathscr{C}^{\frac{1}{j}}_{x,y}$. This is enough to prove the claim, since the integral is symmetric. If $\hat{u}_{\eta,j}(x) = M$ there is nothing to prove. Otherwise for every δ > 0 we can find a chain ${\sf c}_\delta \in \mathscr{C}^{\frac{1}{j}}$ with $\omega({\sf c}_\delta) = x$ and $\alpha({\sf c}_\delta) \in A$ such that $\hat{u}_{\eta,j}(x) \ge u(\alpha({\sf c}_\delta)) + \int_{{\sf c}_\delta} \hat{g}_{\eta,j} - \delta$. The chain ${\sf c}_{\delta} \star {\sf c}$ is admissible for the computation of the infimum in the definition of $\hat{u}_{\eta,j}(y)$, giving $\hat{u}_{\eta,j}(y) \le u(\alpha({\sf c}_\delta)) + \int_{{\sf c}_\delta} \hat{g}_{\eta,j} + \int_{\sf c} \hat{g}_{\eta,j} \le \hat{u}_{\eta,j}(x) + \int_{\sf c} \hat{g}_{\eta,j} + \delta$. The claim follows from the arbitrariness of δ.

  4. (d) $\hat{u}_{\eta,j}(x) \le \hat{u}_{\eta,k}(x)$ if $j\le k$. This follows since $\hat{g}_{\eta,j} \le \hat{g}_{\eta,k}$ and since each $\frac{1}{k}$-chain is also a $\frac{1}{j}$-chain.

  5. (e) $\hat{u}_{\eta,j}$ is constant on each $\frac{1}{j}$-chain connected component of ${\rm X} \setminus B_{2R}(x_0)$. Indeed, let $x,y \in {\rm X}$ be such that there exists a $\frac{1}{j}$-chain ${\sf c}_{x,y} \subseteq {\rm X} \setminus B_{2R}(x_0)$. Let ${\sf c} \in \mathscr{C}^{\frac{1}{j}}$ be such that $\alpha({\sf c}) \in A$ and $\omega({\sf c}) = x$. Then ${\sf c}' = {\sf c} \star {\sf c}_{x,y} \in \mathscr{C}^{\frac{1}{j}}$ and satisfies $\alpha({\sf c}') \in A$, $\omega({\sf c}') = y$. Moreover,

    \begin{equation*}\int_{{\sf c}'} \hat{g}_{\eta,j} = \int_{{\sf c}} \hat{g}_{\eta,j} + \int_{{\sf c}_{x,y}} \hat{g}_{\eta,j} = \int_{{\sf c}} \hat{g}_{\eta,j}\end{equation*}

    since $\hat{g}_{\eta,j} \equiv 0$ on ${\rm X} \setminus B_{2R}(x_0)$. This is enough to show that $\hat{u}_{\eta,j}(y) \le \hat{u}_{\eta,j}(x)$. Reversing the roles of x and y we get the opposite inequality and so that $\hat{u}_{\eta,j}(y) = \hat{u}_{\eta,j}(x)$.

Step 6. Definition of $u_{\eta,j}$ and $g_{\eta,j}$. We define $u_{\eta,j}$ with a cutoff procedure to impose that $u_{\eta,j} \equiv 0$ outside $B_{3R}(x_0)$. We define $u_{\eta,j}$ piecewise. On $B_{2R}(x_0)$ we set $u_{\eta,j} = \hat{u}_{\eta,j}$. Then we define $u_{\eta,j}$ on each $\frac{1}{j}$-chain connected component ${\rm Y}$ of ${\rm X}\setminus B_{2R}(x_0)$ in the following way. By items (e) and (a) of Step 5, we have that $\hat{u}_{\eta,j}$ is constantly equal to some $\delta_{{\rm Y}} \in [-M,0]$ on ${\rm Y}$. We define $u_{\eta,j}$ on ${\rm Y}$ by

\begin{equation*} u_{\eta,j}(x) := \begin{cases} -\frac{\delta_{\rm Y}}{R}{\sf d}(x,x_0) +3\delta_{\rm Y} &\text{if } {\sf d}(x,x_0)\in [2R,3R],\\ 0 &\text{if } {\sf d}(x,x_0) \ge 3R. \end{cases} \end{equation*}

The same proof as in item (b) of Step 5 implies that $u_{\eta,j}$ is $\max\{2Mj, \sup_{\rm X} \hat{g}_{\eta,j} + \frac{M}{R}\}$-Lipschitz. Indeed, the only non-trivial case is when ${\sf d}(x,y) \lt \frac{1}{j}$. In that case, if $x,y \in B_{2R}(x_0)$ then the proof does not change. If $x,y \in {\rm X} \setminus B_{2R}(x_0)$ then they must be in the same $\frac{1}{j}$-chain connected component ${\rm Y}$ of ${\rm X} \setminus B_{2R}(x_0)$, so $\vert u_{\eta,j}(x) - u_{\eta,j}(y) \vert \le \frac{\vert \delta_Y \vert}{R}\vert {\sf d}(x,x_0) - {\sf d}(y,x_0) \vert \le \frac{M}{R} {\sf d}(x,y)$. In the last case we have $x\in B_{2R}(x_0)$ and $y\in {\rm X}\setminus B_{2R}(x_0)$. Here we have

\begin{align*}\vert u_{\eta,j}(x) - u_{\eta,j}(y) \vert& \le \vert \hat{u}_{\eta,j}(x) - \hat{u}_{\eta,j}(y) \vert + \frac{\vert \delta_{\rm Y} \vert}{R} ({\sf d}(y,x_0) - 2R) \\ &\le \sup_{{\rm X}} \hat{g}_{\eta,j} {\sf d}(x,y) + \frac{M}{R} {\sf d}(x,y),\end{align*}

where ${\rm Y}$ is the $\frac{1}{j}$-chain connected component of ${\rm X} \setminus B_{2R}(x_0)$ containing y.

It remains to define the new gradients $g_{\eta,j}$. We set

\begin{equation*}\delta_j := \sup \left\{\vert \delta_{\rm Y} \vert \,:\, {\rm Y} \in \mathscr{C}^{\frac{1}{j}}\textrm{-cc}({\rm X} \setminus B_{2R}(x_0))\right\}\end{equation*}

and

\begin{equation*}h_{\eta,j} := \frac{\delta_j}{R} \cdot \max\left\{0,\min\left\{1, 5 - \frac{{\sf d}(x_0, x)}{R}\right\}\right\}.\end{equation*}

Observe that $h_{\eta,j} = \frac{\delta_j}{R}$ on $B_{4R}(x_0)$, that $h_{\eta,j}$ is Lipschitz and that $h_{\eta,j} \equiv 0$ on ${\rm X}\setminus B_{5R}(x_0)$. Finally define $g_{\eta,j} := \hat{g}_{\eta,j} + h_{\eta,j}$. We claim that $g_{\eta,j} \in {\rm LUG}^{\frac{1}{j}}(u_{\eta,j})$. By definition, $g_{\eta,j}$ is Lipschitz, so it remains to show it is a $\frac{1}{j}$-upper gradient of $u_{\eta,j}$. Let ${\sf c} \in \mathscr{C}^{\frac{1}{j}}$. We divide ${\sf c}$ into subchains ${\sf c}_i$ such that $\omega({\sf c}_i) = \alpha({\sf c}_{i+1})$ for every i and such that each ${\sf c}_i$ is of one of the following forms:

  • ${\sf c}_i \subseteq B_{2R}(x_0)$;

  • ${\sf c}_i \subseteq A_{2R,3R}(x_0) := \overline{B}_{3R}(x_0) \setminus B_{2R}(x_0)$;

  • ${\sf c}_i \subseteq {\rm X} \setminus B_{3R}(x_0)$;

  • ${\sf c}_i = \{x_i,y_i\}$ with $x_i \in B_{2R}(x_0)$ and $y_i \notin B_{2R}(x_0)$ or $x_i \notin B_{2R}(x_0)$ and $y_i \in B_{2R}(x_0)$;

  • ${\sf c}_i = \{x_i,y_i\}$ with $x_i \in B_{3R}(x_0)$ and $y_i \notin B_{3R}(x_0)$ or $x_i \notin B_{3R}(x_0)$ and $y_i \in B_{3R}(x_0)$.

In all these cases we prove that $\vert u_{\eta,j}(\omega({\sf c}_i)) - u_{\eta,j}(\alpha({\sf c}_i)) \vert \le \int_{{\sf c}_i}g_{\eta,j}$. If ${\sf c}_i \subseteq B_{2R}(x_0)$, we use item (c) of Step 5 to get $\vert u_{\eta,j}(\omega({\sf c}_i)) - u_{\eta,j}(\alpha({\sf c}_i)) \vert = \vert \hat{u}_{\eta,j}(\omega({\sf c}_i)) - \hat{u}_{\eta,j}(\alpha({\sf c}_i)) \vert \le \int_{{\sf c}_i} \hat{g}_{\eta,j} \le \int_{{\sf c}_i} g_{\eta,j}$. If ${\sf c}_i \subseteq A_{2R,3R}(x_0)$, then it must be contained in the same $\frac{1}{j}$-chain connected component ${\rm Y}$ of ${\rm X} \setminus B_{2R}(x_0)$. Therefore we have $\vert u_{\eta,j}(\omega({\sf c}_i)) - u_{\eta,j}(\alpha({\sf c}_i)) \vert \le \frac{\vert \delta_Y \vert}{R}\vert {\sf d}(\omega({\sf c}_i),x_0) - {\sf d}(\alpha({\sf c}_i),x_0) \vert \le \frac{\delta_j}{R} {\sf d}(\omega({\sf c}_i),\alpha({\sf c}_i)) \le \int_{{\sf c}_i} h_{\eta,j} \le \int_{{\sf c}_i} g_{\eta,j}$. If ${\sf c}_i \subseteq {\rm X} \setminus B_{3R}(x_0)$ then $ 0 = \vert u_{\eta,j}(\omega({\sf c}_i)) - u_{\eta,j}(\alpha({\sf c}_i)) \vert \le \int_{{\sf c}_i} g_{\eta,j}$. If ${\sf c}_i = \{x_i,y_i\}$ is as in the last two cases, we have

\begin{equation*} \begin{aligned} \vert u_{\eta,j}(y_i) - u_{\eta,j}(x_i) \vert &\le \vert \hat{u}_{\eta,j}(y_i) - \hat{u}_{\eta,j}(x_i)\vert + \frac{\vert \delta_j \vert}{R}\\ &\quad\times \max\{{\sf d}(y_i,\partial A_{2R,3R}(x_0)), {\sf d}(x_i,\partial A_{2R,3R}(x_0))\}\\ &\le \int_{\{x_i,y_i\}} \hat{g}_{\eta,j} + \int_{\{x_i,y_i\}} h_{\eta,j} = \int_{\{x_i,y_i\}} g_{\eta,j}, \end{aligned} \end{equation*}

because $\max\{{\sf d}(x_i,\partial A_{2R,3R}(x_0)), {\sf d}(y_i,\partial A_{2R,3R}(x_0))\} \le {\sf d}(x_i,y_i)$ and because of item (c) of Step 5. Therefore

\begin{equation*}\vert u_{\eta,j}(\omega({\sf c})) - u_{\eta,j}(\alpha({\sf c})) \vert \le \sum_i \vert u_{\eta,j}(\omega({\sf c}_i)) - u_{\eta,j}(\alpha({\sf c}_i)) \vert \le \sum_i\int_{{\sf c}_i} g_{\eta,j} = \int_{{\sf c}} g_{\eta,j}.\end{equation*}

Step 7. In this step we show that $u_{\eta,j}$ and $g_{\eta,j}$ satisfy (15) if we prove that $u_{\eta,j}$ converges pointwise to u on KN and $\hat{u}_{\eta,j}$ converges uniformly to $u \equiv 0$ on ${\rm X} \setminus B_{2R}(x_0)$ as $j\to +\infty$. Indeed, if this is true, we get that δj satisfies

(16)\begin{equation} \varlimsup_{j\to +\infty} \delta_j = \varlimsup_{j \to +\infty} \|\hat{u}_{\eta,j}\|_{L^\infty({\rm X}\setminus B_{2 R}(x_0))}= 0. \end{equation}

Therefore we obtain

\begin{equation*} \begin{aligned} &\lim_{j\to +\infty} \Vert u - u_{\eta,j} \Vert_{L^p({\rm X})} \\ &=\lim_{j\to +\infty}\left(\int_{K_N} \vert u - u_{\eta,j} \vert^p \,{\mathrm d}\mathfrak m + \int_{B_{2R}(x_0)\setminus K_N} \vert u - u_{\eta,j} \vert^p \,{\mathrm d}\mathfrak m\right.\\ & \quad\left. + \int_{{\rm X} \setminus B_{2R}(x_0)} \vert u - u_{\eta,j} \vert^p \,{\mathrm d}\mathfrak m \right)^{\frac{1}{p}} \\ &\le 0 + (2M)\mathfrak m(B_{2R}(x_0) \setminus K_N)^{\frac{1}{p}} + 0 \le \eta. \end{aligned} \end{equation*}

We used dominated convergence for the estimate of the first summand and, for the second term, the estimate $\vert u - u_{\eta,j} \vert \le 2M$, which holds since both functions take values in $[-M,M]$, together with $\mathfrak m(B_{2R}(x_0) \setminus K_N)^{\frac{1}{p}} \le \eta (2M)^{-1}$. For the third term we divided the integral into two parts: on the annulus $A_{2R,3R}(x_0)$ we used again dominated convergence, since $\vert u-u_{\eta,j} \vert = \vert u_{\eta,j}\vert \le \delta_j$ on it and we can use (16), while on ${\rm X} \setminus B_{3R}(x_0)$ we have $\vert u - u_{\eta,j} \vert = 0$. This proves the first estimate in (15). For the second one we observe that

\begin{equation*} \varlimsup_{j\to +\infty} \Vert g_{\eta,j} - g_\eta \Vert_{L^p({\rm X})} \le \varlimsup_{j\to +\infty} \left( \Vert \hat{g}_{\eta,j} - g_\eta \Vert_{L^p({\rm X})} + \Vert h_{\eta,j} \Vert_{L^p({\rm X})}\right) = 0, \end{equation*}

where we used dominated convergence on the first term, since $\hat{g}_{\eta,j} \to g_\eta$ pointwise almost everywhere, with $0 \le \hat{g}_{\eta,j} \le g_\eta$, and they are supported on $B_{2R}(x_0)$, and the estimate

\begin{equation*}\Vert h_{\eta,j} \Vert_{L^p({\rm X})} \le \frac{\delta_j}{R} \mathfrak m(\overline{B}_{5R}(x_0))^{\frac{1}{p}},\end{equation*}

where the limit superior of the right hand side is 0 because of (16).

In the last two steps we show that $u_{\eta,j}$ converges to u pointwise on KN and that $\hat{u}_{\eta,j}$ converges uniformly to 0 outside $B_{2R}(x_0)$.

Step 8. We prove that $u_{\eta,j}$ converges pointwise to u on KN as $j\to +\infty$. We suppose by contradiction that there exists some $x \in K_N$ such that $u_{\eta,j}(x)$ does not converge to u(x) as j goes to $+\infty$. On KN we have $u_{\eta,j} = \hat{u}_{\eta,j}$, by definition. By item (d) of Step 5, the sequence $\hat{u}_{\eta,j}(x)$ is increasing and so it admits a limit. Moreover, by item (a) of Step 5, $\hat{u}_{\eta,j}(x) \le u(x)$ for every j. So, our assumption means that $\lim_{j\to +\infty} \hat{u}_{\eta,j}(x) \lt u(x)$. Let us fix δ > 0 such that $\lim_{j\to +\infty} \hat{u}_{\eta,j}(x) \le u(x) - \delta$. Since $u(x) \le M$, we get $\hat{u}_{\eta,j}(x) \le M-\delta$ for every j. By definition, we can find chains ${\sf c}_j \in \mathscr{C}^{\frac{1}{j}}$ such that $\omega({\sf c}_j) = x$, $\alpha({\sf c}_j) \in A$ and

(17)\begin{equation} u(\alpha({\sf c}_j)) + \int_{{\sf c}_j} \hat{g}_{\eta,j} \lt u(x) - \frac{\delta}{2} \le M. \end{equation}

We consider two subchains. If ${\sf c}_j = \{q_0^j, \ldots, q_{N_j}^j = x\}$ we define ${\sf c}_j^s := \{q_0^j,\ldots,q_{i_j}^j\}$ and ${\sf c}_j^e := \{q_{k_j}^j,\ldots,q_{N_j}^j\}$, where $i_j$ is the biggest integer i such that the chain $\{q_0^j,\ldots,q_{i}^j \}$ is contained in $B_{2R - \frac{1}{j}}(x_0)$, while $k_j$ is the smallest integer k such that the chain $\{q_{k}^j,\ldots,q_{N_j}^j = x\}$ is contained in $B_{2R - \frac{1}{j}}(x_0)$. Here, the superscripts ‘s’ and ‘e’ stand for ‘start’ and ‘end’, respectively. Figure 1 represents these subchains in three different exhaustive situations. As $x\in K_N \subseteq B_{2R}(x_0)$, we have that $x \in B_{2R - \frac{1}{j}}(x_0)$ for j big enough. For these indices, ${\sf c}_j^e$ is not empty, as it contains at least x. Moreover, $\omega({\sf c}_j^e) = x$ and $\alpha({\sf c}_j^e) \in A$. The last assertion can be proved as follows: either ${\sf c}_j^e = {\sf c}_j$, so $\alpha({\sf c}_j^e) = \alpha({\sf c}_j) \in A$, or ${\sf d}(\alpha({\sf c}_j^e), x_0) \ge 2R - \frac{2}{j} \ge R$, because of the maximality property of ${\sf c}_j^e$, and so $\alpha({\sf c}_j^e) \in {\rm X} \setminus B_{R}(x_0) \subseteq A$. On the other hand, ${\sf c}_j^s$ can be empty, and it is empty if and only if $\alpha({\sf c}_j) \notin B_{2R - \frac{1}{j}}(x_0)$. If ${\sf c}_j^s$ is not empty then either ${\sf c}_j^s = {\sf c}_j$, so $\omega({\sf c}_j^s) = x \in A$, or $\omega({\sf c}_j^s) \notin B_{2R - \frac{2}{j}}(x_0)$ by the maximality property of ${\sf c}_j^s$, so $\omega({\sf c}_j^s) \in A$. Moreover, $\alpha({\sf c}_j^s) = \alpha({\sf c}_j) \in A$. In any case the four points $\alpha({\sf c}_j^s)$, $\omega({\sf c}_j^s)$, $\alpha({\sf c}_j^e)$, $\omega({\sf c}_j^e)$ belong to A. There are three possible cases:

  1. (1) $\alpha({\sf c}_j) \notin B_{R}(x_0)$, so $u(\alpha({\sf c}_j)) = 0 = u(\alpha({\sf c}_j^e))$. In this case, (17) implies that

    (18)\begin{equation} u(\alpha({\sf c}_j^e)) + \int_{{\sf c}_j^e} \hat{g}_{\eta,j} \le u(\alpha({\sf c}_j)) + \int_{{\sf c}_j} \hat{g}_{\eta,j} \lt u(x) - \frac{\delta}{2} = u(\omega({\sf c}_j^e)) - \frac{\delta}{2}. \end{equation}
  2. (2) $\alpha({\sf c}_j) \in B_{R}(x_0)$, which means ${\sf c}_j^s \neq \emptyset$ and $\alpha({\sf c}_j^s) = \alpha({\sf c}_j)$, and we have

    (19)\begin{equation} u(\alpha({\sf c}_j^s)) + \int_{{\sf c}_j^s} \hat{g}_{\eta,j} \lt u(\omega({\sf c}_j^s)) - \frac{\delta}{4}. \end{equation}
  3. (3) $\alpha({\sf c}_j) \in B_{R}(x_0)$ and (19) does not hold. In this case, in view of (17), we necessarily have ${\sf c}_j^s \neq {\sf c}_j \neq {\sf c}_j^e$, so $\omega({\sf c}_j^s), \alpha({\sf c}_j^e) \notin B_{2R - \frac{2}{j}}(x_0)$. Moreover

    (20)\begin{equation} \begin{aligned} u(\alpha({\sf c}_j^e)) + \int_{{\sf c}_j^e} \hat{g}_{\eta,j} = u(\omega({\sf c}_j^s)) + \int_{{\sf c}_j^e} \hat{g}_{\eta,j} &\le u(\alpha({\sf c}_j^s)) + \int_{{\sf c}_j^s} \hat{g}_{\eta,j} + \int_{{\sf c}_j^e} \hat{g}_{\eta,j} + \frac{\delta}{4}\\ &\le u(\alpha({\sf c}_j)) + \int_{{\sf c}_j} \hat{g}_{\eta,j} + \frac{\delta}{4} \\ & \lt u(x) - \frac{\delta}{4} = u(\omega({\sf c}_j^e)) - \frac{\delta}{4} \end{aligned} \end{equation}

Figure 1. The picture shows the definition of ${\sf c}_j^s$ and ${\sf c}_j^e$ in three different situations that cover all possible cases. On the left, $\alpha({\sf c}_j) \notin B_{2R-\frac{1}{j}}(x_0)$, so ${\sf c}_j^s = \emptyset$ and ${\sf c}_j^e \neq {\sf c}_j$. In the middle, $\alpha({\sf c}_j) \in B_{2R-\frac{1}{j}}(x_0)$ and ${\sf c}_j$ is contained in $B_{2R-\frac{1}{j}}(x_0)$, so ${\sf c}_j = {\sf c}_j^s = {\sf c}_j^e$. On the right, $\alpha({\sf c}_j) \in B_{2R-\frac{1}{j}}(x_0)$, but ${\sf c}_j \cap ({\rm X} \setminus B_{2R - \frac{1}{j}}(x_0)) \neq \emptyset$, so ${\sf c}_j^s \neq \emptyset$, ${\sf c}_j^s \neq {\sf c}_j$ and ${\sf c}_j^e \neq {\sf c}_j$.

One of the three cases (1), (2) or (3) holds true for infinitely many j’s. We now show how to conclude the proof supposing that case (1) occurs infinitely many times. Later we will show how to conclude in the other two cases. We restrict to the indices where (1) holds true and we do not relabel the subsequence. We claim that the assumptions of proposition 2.2 (see also the discussion in remark 2.3) are satisfied by $\{{\sf c}_j^e \}_j$.

  • Length. $\ell({\sf c}_j^e) \le \sigma^{-1}\int_{{\sf c}_j^e}\hat{g}_{\eta,j} \le \sigma^{-1}M$, where we used (18), (17) and the fact that ${\sf c}_j^e \subseteq B_{2R - \frac{1}{j}}(x_0)$, so $\hat{g}_{\eta,j} \ge \sigma$ on ${\sf c}_j^e$.

  • Diameter. Since $u{|_{A}}$ is continuous, we can take $\Delta \gt 0$ so that for every $y\in A$ such that ${\sf d}(x, y) \le \Delta$ we have $\vert u(x) - u(y) \vert \le \frac{\delta}{4}$. Since by (18) $u(\alpha({\sf c}_j^e)) \lt u(x) - \frac{\delta}{4}$, and since $\alpha({\sf c}_j^e) \in A$, we conclude that ${\rm Diam}({\sf c}_j^e) \ge {\sf d}(\alpha({\sf c}_j^e), x) \ge \Delta$.

  • h-sum. By definition, $h_j{|_{B_{2R - \frac{1}{j}}(x_0)}} \le \hat{g}_{\eta,j}{|_{B_{2R - \frac{1}{j}}(x_0)}}$, so

    \begin{equation*} \int_{{\sf c}_j^e}h_j \le \int_{{\sf c}_j^e} \hat{g}_{\eta,j}\le M, \end{equation*}

    again by (18) and (17).

Therefore the chains ${\sf c}_j^e$ subconverge to a curve γ of ${\rm X}$, by proposition 2.2 and remark 2.3. We relabel the sequence accordingly, and we denote the chains again by ${\sf c}_j^e$. Since A is closed, the point $\alpha(\gamma) = \lim_{j\to +\infty} \alpha({\sf c}_j^e)$ belongs to A and, since $u{|_{A}}$ is continuous, we have that $u(\alpha(\gamma)) = \lim_{j\to +\infty} u(\alpha({\sf c}_j^e)).$ This, together with Step 4, shows that we are in a position to apply lemma 2.7 to the functions $\hat{g}_{\eta,j} \nearrow g_\eta$ and to the sequence of chains ${\sf c}_j^e$ converging to γ, concluding that

\begin{equation*} \begin{aligned} u(\alpha(\gamma)) + \int_\gamma g_\eta \le \varliminf_{j\to +\infty} \left( u(\alpha({\sf c}_j^e)) + \int_{{\sf c}_j^e} \hat{g}_{\eta,j} \right) \le u(x) - \frac{\delta}{2} = u(\omega(\gamma)) - \frac{\delta}{2}. \end{aligned} \end{equation*}

Here we used (18) in the last inequality. This contradicts the fact that $g_\eta \in {\rm UG}^{}(u)$.

Suppose now that case (3) occurs for infinitely many indices j. Then (20) and the same proof given above show that $\{{\sf c}_j^e\}_j$ is a sequence of chains satisfying again the assumptions of proposition 2.2. The remaining part of the argument is exactly the same, using (20) to violate the fact that $g_\eta$ is an upper gradient of u.

Finally suppose that (2) occurs for infinitely many indices. Then the sequence $\{{\sf c}_j^s\}_j$ satisfies the assumptions of proposition 2.2. Indeed, the estimates of the length and of the h-sum are identical, using (19) instead of (18). In the proof of the lower bound of the diameter we need to distinguish two cases. If ${\sf c}_j^s = {\sf c}_j$ then the same proof as above shows that ${\rm Diam}({\sf c}_j^s) \ge \Delta$. Otherwise we have that $\omega({\sf c}_j^s) \notin B_{2R-\frac{2}{j}}(x_0)$, so ${\rm Diam}({\sf c}_j^s) \ge {\sf d}(\alpha({\sf c}_j^s), \omega({\sf c}_j^s)) \ge R - \frac{2}{j} \ge 1$. In any case, ${\rm Diam}({\sf c}_j^s) \ge \min\{\Delta, 1\} \gt 0.$ The remaining part of the argument is again the same, using (19) to violate the fact that $g_\eta$ is an upper gradient of u. We remark that the argument works since both endpoints of ${\sf c}_j^s$ belong to A, which is closed and on which u is continuous.

Step 9. We prove that $\hat{u}_{\eta,j}$ converges uniformly to 0 on ${\rm X}\setminus B_{2R}(x_0)$ as $j\to +\infty$. Suppose it is not the case and recall that $\hat{u}_{\eta,j} \le u$ because of item (a) of Step 5. Then we can find $0 \lt \delta \lt M$ and a sequence of points $x_j\notin B_{2R}(x_0)$ such that $\hat{u}_{\eta,j}(x_j) \lt -\delta$. By definition of $\hat{u}_{\eta,j}$ there must be a chain ${\sf c}_j = \{q_0^j,\ldots,q_{N_j}^j\} \in \mathscr{C}^{\frac{1}{j}}$ with $\alpha({\sf c}_j) \in A$ and $\omega({\sf c}_j) = x_j$ such that $u(\alpha({\sf c}_j)) + \int_{{\sf c}_j}\hat{g}_{\eta,j} \le -\frac{\delta}{2}$. Since $u{|_{{\rm X} \setminus B_{R}(x_0)}} = 0$, then $\alpha({\sf c}_j)$ must belong to $B_R(x_0)$. Let ${\sf c}_j^s = \{q_0^j,\ldots,q_{i_j}^j\}$ be the subchain of ${\sf c}_j$ with the property that ij is the biggest integer i such that $\{q_0^j,\ldots,q_{i}^j\} \subseteq B_{2R - \frac{1}{j}}(x_0)$. By maximality we have that $q_{i_j}^j \notin B_{2R - \frac{2}{j}}(x_0)$. Therefore $u(q_{i_j}^j) = u(\omega({\sf c}_j^s)) = 0$. Moreover, we have

\begin{equation*} u(\alpha({\sf c}_j^s)) + \int_{{\sf c}_j^s}\hat{g}_{\eta,j} \le u(\alpha({\sf c}_j)) + \int_{{\sf c}_j}\hat{g}_{\eta,j} \le -\frac{\delta}{2} = u(\omega({\sf c}_j^s)) - \frac{\delta}{2}. \end{equation*}

We now claim that $\{{\sf c}_j^s\}_j$ satisfies the assumptions of proposition 2.2. The proof of the upper bound on the length is the same as the one given in Step 8, since $\hat{g}_{\eta,j} \ge \sigma$ on ${\sf c}_j^s$, so

\begin{equation*}\ell({\sf c}_j^s) \le \sigma^{-1}\int_{{\sf c}_j^s} \hat{g}_{\eta,j} \le \sigma^{-1}\left(-u(\alpha({\sf c}_j^s)) - \frac{\delta}{2}\right) \le \sigma^{-1}M.\end{equation*}

For the diameter we have: ${\rm Diam}({\sf c}_j^s) \ge {\sf d}(\alpha({\sf c}_j^s), \omega({\sf c}_j^s)) \ge R - \frac{2}{j}$, since $\alpha({\sf c}_j^s) \in B_R(x_0)$ and $\omega({\sf c}_j^s) \notin B_{2R - \frac{2}{j}}(x_0)$. Finally,

\begin{equation*}\int_{{\sf c}_j^s} h_j \le \int_{{\sf c}_j^s} \hat{g}_{\eta,j} \le M.\end{equation*}

Using again proposition 2.2 and remark 2.3, we conclude that the sequence of chains $\{{\sf c}_j^s\}_j$ subconverges to a curve γ of ${\rm X}$. Arguing as in Step 8 we deduce that gη violates the upper gradient inequality on γ since the endpoints of ${\sf c}_j^s$ belong to A and u is continuous on A. This is a contradiction.

Remark 6.6. By proposition 4.4 we know that every Borel ɛ-upper gradient with finite values is an upper gradient in the classical sense. Therefore the proof above implies that if $({\rm X},{\sf d})$ is complete then $\tilde{\rm F}_{{\rm curve}}(u)$ can be realized as an infimum of the $L^p({\rm X})$-norms of Lipschitz upper gradients of Lipschitz functions that converge to u in $L^p({\rm X})$.

As announced, the proof of theorem 6.4 can be adapted to show that $H^{1,p}_{{\rm \mathscr{C}^{}, \, {\rm Lip}}}({\rm X})$ and $H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X})$ are isometric if $({\rm X},{\sf d})$ is complete.

Theorem 6.7. Let $({\rm X},{\sf d},\mathfrak m)$ be a metric measure space such that $({\rm X},{\sf d})$ is complete. Then

\begin{equation*} H^{1,p}_{{\rm \mathscr{C}^{},\, {\rm Lip}}}({\rm X}) = H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X}) \end{equation*}

and

\begin{equation*} \|u \|_{H^{1,p}_{{\rm \mathscr{C}^{},\, {\rm Lip}}}({\rm X})} = \|u \|_{H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X})} \end{equation*}

for every $u\in L^p({\rm X})$.

Before proving it, we need a version of the Leibniz rule for chain upper gradients.

Proposition 6.8 (Leibniz rule)

Let $u\colon {\rm X} \to \mathbb{R}$ be Borel and $\varphi \in {\rm Lip}({\rm X})$. For every $g \in {\rm UG}^\varepsilon(u)$, we have

(21)\begin{equation} \vert u \vert\, {\rm sl}_\varepsilon \varphi + Q_\varepsilon \varphi\, g \in {\rm UG}^\varepsilon(u \varphi), \end{equation}

where $Q_\varepsilon\varphi(x):= \sup_{y\in \overline{B}_\varepsilon(x)} \vert \varphi \vert(y)$. In particular, for every $u \in \mathcal{L}^p({\rm X})$ it holds that

\begin{equation*} {\rm F}_{\mathscr{C}^{}}(u\varphi) \le \left( \int |u|^p ({\rm lip}\,\varphi)^p \,{\mathrm d} \mathfrak m \right)^{\frac{1}{p}} + \|\varphi\|_{L^\infty({\rm X})}\, {\rm F}_{\mathscr{C}^{}}(u). \end{equation*}

Proof. Let $g \in {\rm UG}^\varepsilon(u)$. We verify (21). Given ${\sf c} =\{q_i\}_{i=0}^N \in \mathscr{C}^{\varepsilon}$, we compute

\begin{align*} |&(u \varphi)(\omega({\sf c}))- (u \varphi)(\alpha({\sf c}))| \le \sum_{i=0}^{N-1} |(u \varphi)(q_{i+1})- (u \varphi)(q_{i})|\\ & \le \sum_{i=0}^{N-1} \bigg| u(q_{i+1})\varphi(q_{i+1}) -\frac{1}{2} u(q_{i+1})\varphi(q_{i}) + \frac{1}{2} u(q_{i+1})\varphi(q_{i}) \\ &\qquad\quad-\frac{1}{2}u(q_{i}) \varphi(q_{i+1})+ \frac{1}{2}u(q_{i}) \varphi(q_{i+1}) - u(q_{i}) \varphi(q_{i}) \bigg| \\ &\le \sum_{i=0}^{N-1} \left( \frac{1}{2}\vert u(q_{i+1}) \vert{\rm sl}_\varepsilon \varphi(q_{i+1}) + \frac{1}{2}\vert u(q_{i}) \vert{\rm sl}_\varepsilon \varphi(q_{i}) \right)\,{\sf d}(q_i,q_{i+1})\\ &\qquad\quad+ \frac{1}{2}\vert \varphi(q_{i+1})\vert \vert u(q_{i+1})-u(q_{i}) \vert + \frac{1}{2}\vert \varphi(q_{i}) \vert \vert u(q_{i+1})-u(q_{i}) \vert \\ & \le \int_{{\sf c}} \vert u \vert\,{\rm sl}_{\varepsilon} \varphi + \sum_{i=0}^{N-1} \frac{1}{4} (\vert \varphi(q_{i})\vert + \vert\varphi(q_{i+1})\vert)(g(q_i)+g(q_{i+1})){\sf d}(q_i,q_{i+1})\\ & \le \int_{{\sf c}} \vert u \vert\,{\rm sl}_{\varepsilon} \varphi + \sum_{i=0}^{N-1} \frac{(Q_\varepsilon \varphi\, g)(q_i) + (Q_\varepsilon \varphi\, g)(q_{i+1})}{2} {\sf d}(q_i,q_{i+1}) = \int_{{\sf c}} (\vert u \vert \,{\rm sl}_{\varepsilon} \varphi + Q_\varepsilon \varphi\, g). \end{align*}

Now, to estimate ${\rm F}_{\mathscr{C}^{}}(\varphi\, u)$, we use that $\inf \{\|h\|_{L^p({\rm X})}:\, h \in {\rm UG}^\varepsilon(u \varphi),\, h\ \text{Borel}\}$ is less than or equal to

\begin{equation*} \begin{aligned} &\inf\{\| |u|\,{\rm sl}_{\varepsilon} \varphi + Q_\varepsilon \varphi\, g \|_{L^p({\rm X})}\,:\, g\in {\rm UG}^{\varepsilon}(u),\, g\ \text{Borel} \} \\ \le &\inf\{\| |u| \,{\rm sl}_{\varepsilon} \varphi \|_{L^p({\rm X})} + \| Q_\varepsilon \varphi\, g \|_{L^p({\rm X})}\,:\, g\in {\rm UG}^{\varepsilon}(u),\, g\ \text{Borel} \}\\ \le &\left(\int |u|^p\,({\rm sl}_{\varepsilon}\varphi)^p\,{\mathrm d} \mathfrak m\right)^{\frac{1}{p}} + \| \varphi \|_{L^\infty({\rm X})} \inf\{\|g\|_{L^p({\rm X})}:\, g \in {\rm UG}^\varepsilon(u),\, g\ \text{Borel} \} \end{aligned} \end{equation*}

where we used that $\|Q_\varepsilon \varphi\|_{L^\infty({\rm X})} =\|\varphi\|_{L^\infty({\rm X})}$. By taking the limit as ɛ → 0 and using the fact that φ is Lipschitz, dominated convergence and the definition of ${\rm F}_{\mathscr{C}^{}}(\cdot)$, we get the conclusion.
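Let us point out, for orientation, how (21) relates to the classical Leibniz rule: for Lipschitz φ the functions ${\rm sl}_\varepsilon \varphi$ and $Q_\varepsilon \varphi$ converge pointwise to ${\rm lip}\,\varphi$ and $\vert \varphi\vert$ respectively as ɛ → 0 (a fact used in the proof above), so that, informally, (21) can be read as a scale-ɛ analogue of the pointwise bound ${\rm lip}(u\varphi) \le \vert u \vert\, {\rm lip}\,\varphi + \vert \varphi \vert\, {\rm lip}\,u$, valid when u is itself Lipschitz.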

Proof of theorem 6.7

The proof is a modification of the one of theorem 6.4. We highlight the differences in each step.

Step 1. It is enough to show that for every $u\in L^p({\rm X})$ we can find a sequence of bounded functions $u_j$ with bounded support such that $u_j \to u$ in $L^p({\rm X})$ and $\varliminf_{j\to +\infty}{\rm F}_{\mathscr{C}^{}}(u_j) \le {\rm F}_{\mathscr{C}^{}}(u)$; the rest of the proof then does not change. We fix a basepoint $x_0 \in {\rm X}$ and we consider the 1-Lipschitz function $\varphi_j(x) = \max\{0,\min\{1,j+1-{\sf d}(x,x_0)\}\}$. We define $u_j := \min \{j, \max\{-j, \varphi_j u\}\}$. By definition, $u_j$ is bounded and has bounded support. Moreover, every ɛ-upper gradient of $\varphi_j u$ is also an ɛ-upper gradient of $u_j$. This implies that ${\rm F}_{\mathscr{C}^{}}(u_j) \le {\rm F}_{\mathscr{C}^{}}(\varphi_j u)$ for every j. Proposition 6.8 implies that

\begin{equation*} \begin{aligned} \varliminf_{j\to +\infty} {\rm F}_{\mathscr{C}^{}}(u_j) \le \varliminf_{j\to +\infty} {\rm F}_{\mathscr{C}^{}}(\varphi_j u) &\le \varliminf_{j\to +\infty}\left( \int |u|^p ({\rm lip}\,\varphi_j)^p \,{\mathrm d} \mathfrak m \right)^{\frac{1}{p}} + \|\varphi_j\|_{L^\infty}\, {\rm F}_{\mathscr{C}^{}}(u)\\ &\le \varliminf_{j\to +\infty}\left( \int_{\overline{B}_{j+1}(x_0)\setminus B_j(x_0)} |u|^p \,{\mathrm d} \mathfrak m \right)^{\frac{1}{p}} +{\rm F}_{\mathscr{C}^{}}(u)\\ & = {\rm F}_{\mathscr{C}^{}}(u), \end{aligned} \end{equation*}

where in the last equality we used that $u\in L^p({\rm X})$.

Step 2. It does not change. In particular the claim we have to prove is the following. For every ɛ > 0, for every Borel ɛ-upper gradient $g\in {\rm UG}^{\varepsilon}(u)$ and for every η > 0 there exists another Borel ɛ-upper gradient $g_\eta \in {\rm UG}^{\varepsilon}(u)$ such that

\begin{equation*} \Vert g - g_\eta \Vert_{L^p({\rm X})} \lt \eta \end{equation*}

and with the following property. For every $j\in \mathbb{N}$ there exist functions $u_{\eta,j} \colon {\rm X} \to \mathbb{R}$ and $g_{\eta,j}\in {\rm LUG}^{\frac{1}{j}}(u_{\eta,j})$ such that

\begin{equation*} \varlimsup_{j\to +\infty} \Vert u_{\eta,j} - u \Vert_{L^p({\rm X})} \le 2\eta \quad \text{and} \quad \lim_{j\to +\infty} \Vert g_{\eta,j} - g_\eta \Vert_{L^p({\rm X})} = 0. \end{equation*}

Let $R\ge 1$ be such that $u\equiv 0$ on ${\rm X}\setminus B_{R}(x_0)$. We recall that it is enough to consider chain upper gradients $g\in {\rm UG}^{\varepsilon}(u)$ that are lower semicontinuous and such that $g \equiv 0$ on ${\rm X} \setminus B_{2R}(x_0)$, by a truncation argument.

Step 3. The definition of gη does not change. Observe that $g_\eta \equiv 0$ on ${\rm X} \setminus B_{2R}(x_0)$ and that $g_\eta \in {\rm UG}^{\varepsilon}(u)$ because $g_\eta \ge g$.

Step 4. The definition of $\hat{g}_{\eta,j}$ does not change and satisfies the same properties.

Step 5. Here there is a difference in the definition of the set A. Since $g_\eta \in L^p({\rm X})$ we have $\mathfrak m(\{g_\eta = +\infty \})=0$. By outer regularity of the measure we can find an open set $U_\eta$ containing $\{g_\eta = +\infty \}$ and such that $\mathfrak m(U_\eta)^{\frac{1}{p}} \lt \eta (2M)^{-1}$. Moreover, since $g_\eta \equiv 0$ on ${\rm X} \setminus B_{2R}(x_0)$, we can choose $U_\eta$ such that $U_\eta \subseteq B_{2R}(x_0)$. Now we change the definition of the set A by setting $A := (K_N \cup ({\rm X} \setminus B_R(x_0))) \setminus U_\eta$. It is still closed and $u{|_{A}}$ is still continuous. Now, the definition of $\hat{u}_{\eta,j}$ does not change, except for the fact that we use this set A. Properties (a)-(e) continue to hold.

Step 6. The definitions of $u_{\eta,j}$ and $g_{\eta,j}$ do not change.

Step 7. Here we claim that it is enough to show that $u_{\eta,j}(x)$ converges to u(x) for every $x\in K_N\setminus U_\eta$ as $j \to +\infty$ and that $\hat{u}_{\eta,j}$ converges uniformly to 0 on ${\rm X}\setminus B_{2R}(x_0)$. Indeed if this is true we have

\begin{equation*} \begin{aligned} &\varlimsup_{j\to +\infty} \Vert u - u_{\eta,j} \Vert_{L^p({\rm X})} \\ &\le\varlimsup_{j\to +\infty}\left(\int_{K_N \setminus U_\eta} \vert u - u_{\eta,j} \vert^p \,{\mathrm d}\mathfrak m + \int_{(B_{2R}(x_0)\setminus K_N) \cup U_\eta} \vert u - u_{\eta,j} \vert^p \,{\mathrm d}\mathfrak m\right.\\ &\quad \left.+ \int_{{\rm X} \setminus B_{2R}(x_0)} \vert u - u_{\eta,j} \vert^p \,{\mathrm d}\mathfrak m \right)^{\frac{1}{p}}\\ &\le 0 + (2M) \mathfrak m(B_{2R}(x_0) \setminus K_N)^{\frac{1}{p}} + (2M)\mathfrak m(U_\eta)^{\frac{1}{p}} + 0 \le 2\eta. \end{aligned} \end{equation*}

Step 8. Here we need an additional argument that justifies the different choice of A. Indeed, we claim that in any of the three cases the limit curve has its extreme points in A.

In cases (1) and (3) this is true because either $\alpha({\sf c}_j^e) = \alpha({\sf c}_j) \in A$ by definition, or ${\sf d}(\alpha({\sf c}_j^e), {\rm X} \setminus B_{2R}(x_0)) \le \frac{2}{j}$, by maximality of ${\sf c}_j^e$. In the first case the result is trivial because A is closed and $u{|_{A}}$ is continuous. In the second case $\alpha(\gamma) = \lim_{j\to +\infty} \alpha({\sf c}_j^e) \in {\rm X} \setminus B_{2R}(x_0)$, so $\alpha(\gamma)\in A$ since $U_\eta \subseteq B_{2R}(x_0)$. Moreover, $u(\alpha(\gamma)) = 0 = u(\alpha({\sf c}_j^e))$ because all these points belong to ${\rm X} \setminus B_R(x_0)$. On the other hand $\omega(\gamma) = \lim_{j\to +\infty} \omega({\sf c}_j^e) = x \in A$ because x is chosen in $K_N\setminus U_\eta$. Here, we have $u(\omega(\gamma)) = u(x) = u(\omega({\sf c}_j^e))$.

In case (2) we have that $\alpha(\gamma) = \lim_{j\to +\infty} \alpha({\sf c}_j^s) \in A$, because $\alpha({\sf c}_j^s) \in A$ for every j and A is closed. Moreover, $u(\alpha(\gamma)) = \lim_{j\to +\infty} u(\alpha({\sf c}_j^s))$ since $u{|_{A}}$ is continuous. On the other hand, either $\omega({\sf c}_j^s) = x$ for every j, so $\omega(\gamma) = x \in A$ and $u(\omega(\gamma)) = u(x) = u(\omega({\sf c}_j^s))$, or ${\sf d}(\omega({\sf c}_j^s), {\rm X} \setminus B_{2R}(x_0)) \le \frac{2}{j}$, by maximality of ${\sf c}_j^s$. Arguing as before, we get that $\omega(\gamma) \in {\rm X} \setminus B_{2R}(x_0)$, so it belongs to A and $u(\omega(\gamma)) = 0 = u(\omega({\sf c}_j^s))$.

In every case, the extreme points of γ belong to the set $\{g_\eta \lt +\infty\}$. Hence gη satisfies the upper gradient inequality along γ because of proposition 4.4, while the proof shows that this is not the case, giving a contradiction.

Step 9. The proof does not change, using the same modifications we did in Step 8.

The combination of theorems 6.4, 6.7 and proposition 3.2 gives the proof of theorem 1.1.

The next theorem states that the two spaces defined via chains do not change if we take the completion.

Theorem 6.9. Let $({\rm X},{\sf d},\mathfrak m)$ be a metric measure space and let $(\bar{{\rm X}}, \bar{{\sf d}}, \bar{\mathfrak m})$ be its completion. Then the identity map $\iota\colon L^p({\rm X}) \to L^p(\bar{{\rm X}})$ induces isometries between $H^{1,p}_{{\rm \mathscr{C}^{}, \, {\rm Lip}}}({\rm X})$ and $H^{1,p}_{{\rm \mathscr{C}^{}, \, {\rm Lip}}}(\bar{{\rm X}})$ and between $H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X})$ and $H^{1,p}_{{\rm \mathscr{C}^{}}}(\bar{{\rm X}})$.

Proof. Let $u_0\in \mathcal{L}^p(\bar{{\rm X}})$ be any representative of $\iota(u)$. Since ɛ-chains in ${\rm X}$ are ɛ-chains in $\bar{{\rm X}}$, restrictions of $\bar{\mathfrak m}$-measurable elements in ${\rm UG}^{\varepsilon}(u_0)$ are $\mathfrak m$-measurable and belong to ${\rm UG}^{\varepsilon}(u)$ and the property of being Lipschitz is preserved. Thus, $H^{1,p}_{{\rm \mathscr{C}^{}}}(\bar{{\rm X}})\subseteq \iota(H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X}))$ and $\|u\|_{H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X})} \le \| \iota(u) \|_{H^{1,p}_{{\rm \mathscr{C}^{}}}(\bar{{\rm X}})}$ for every $u \in L^p({\rm X})$, and similarly $H^{1,p}_{{\rm \mathscr{C}^{},\,{\rm Lip}}}(\bar{{\rm X}})\subseteq \iota(H^{1,p}_{{\rm \mathscr{C}^{},\,{\rm Lip}}}({\rm X}))$ and $\|u\|_{H^{1,p}_{{\rm \mathscr{C}^{},\,{\rm Lip}}}({\rm X})} \le \| \iota(u) \|_{H^{1,p}_{{\rm \mathscr{C}^{},\,{\rm Lip}}}(\bar{{\rm X}})}$ for every $u \in L^p({\rm X})$.

For the other inequality we proceed in two different ways. If $u \in L^p({\rm X}) \cap {\rm Lip}({\rm X})$ and $g\in {\rm LUG}^{\varepsilon}(u) \cap L^p({\rm X})$, then we consider the Lipschitz extensions $\bar{u}$, $\bar{g}$ of u and g on $\bar{{\rm X}}$. We claim that $\bar{g}\in {\rm LUG}^{\frac{\varepsilon}{2}}(\bar{u}) \cap L^p(\bar{{\rm X}})$. Indeed, given a chain ${\sf c} = \{q_i\}_{i=0}^N \in \mathscr{C}^{\frac{\varepsilon}{2}}(\bar{{\rm X}})$ we can find a sequence of chains ${\sf c}_j = \{q_i^j\}_{i=0}^N \in \mathscr{C}^{\varepsilon}({\rm X})$ such that $q_i^j$ converges to $q_i$ for every $i=0,\ldots,N$ as $j\to +\infty$. By continuity of $\bar{g}$ and $\bar{u}$ we then have

\begin{equation*}\bar{u}(\omega({\sf c})) - \bar{u}(\alpha({\sf c})) = \lim_{j\to +\infty} \big( u(\omega({\sf c}_j)) - u(\alpha({\sf c}_j)) \big) \le \lim_{j\to +\infty}\int_{{\sf c}_j} g = \int_{{\sf c}}\bar{g}.\end{equation*}

Therefore $\tilde{\rm F}_{{\rm \mathscr{C}^{},\,{\rm Lip}}}(\iota(u)) = \tilde{\rm F}_{{\rm \mathscr{C}^{},\,{\rm Lip}}}(\bar{u}) \le {\rm F}_{\mathscr{C}^{},\,{\rm Lip}}(u)$ for every $u\in L^p({\rm X})$. This is enough to conclude that $\iota(H^{1,p}_{{\rm \mathscr{C}^{},\,{\rm Lip}}}({\rm X})) \subseteq H^{1,p}_{{\rm \mathscr{C}^{},\,{\rm Lip}}}(\bar{{\rm X}})$ and $\|\iota(u) \|_{H^{1,p}_{{\rm \mathscr{C}^{},\,{\rm Lip}}}(\bar{{\rm X}})} \le \|u\|_{H^{1,p}_{{\rm \mathscr{C}^{},\,{\rm Lip}}}({\rm X})}$ for every $u \in L^p({\rm X})$.

For the remaining inequality we recall that in the definition of ${\rm F}_{\mathscr{C}^{}}(u)$ the infimum of the $L^p({\rm X})$-norms can be taken among the p-weak ɛ-upper gradients of u that are $\mathfrak m$-measurable. Moreover, since $\mathfrak m(\bar{{\rm X}}\setminus {\rm X}) = 0$, then ${\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}(\bar{{\rm X}}\setminus {\rm X})) = 0$. This means that every $g\in {\rm WUG}_{p}^{\varepsilon}(u) \cap L^p({\rm X})$ defines a $\bar{\mathfrak m}$-measurable function on $\bar{{\rm X}}$, by (2), which belongs to ${\rm WUG}_{p}^{\varepsilon}(u_0) \cap L^p(\bar{{\rm X}})$, where $u_0\in \mathcal{L}^p(\bar{{\rm X}})$ is any representative of $\iota(u)$. Hence ${\rm F}_{\mathscr{C}^{}}(u_0) \le {\rm F}_{\mathscr{C}^{}}(u)$ for every $u\in L^p({\rm X})$. This implies that $\iota(H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X})) \subseteq H^{1,p}_{{\rm \mathscr{C}^{}}}(\bar{{\rm X}})$ and $\|\iota(u) \|_{H^{1,p}_{{\rm \mathscr{C}^{}}}(\bar{{\rm X}})} \le \|u\|_{H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X})}$ for every $u \in L^p({\rm X})$.

Theorem 1.2. Let $({\rm X},{\sf d},\mathfrak m)$ be a metric measure space, possibly non-complete. Then

\begin{equation*} H^{1,p}_{{\rm \mathscr{C}^{},\, {\rm Lip}}}({\rm X}) = H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X}) = H^{1,p}_{{\rm AGS}}({\rm X}) \end{equation*}

and

\begin{equation*} \|u \|_{H^{1,p}_{{\rm \mathscr{C}^{},\, {\rm Lip}}}({\rm X})} = \|u \|_{H^{1,p}_{{\rm \mathscr{C}^{}}}({\rm X})} = \|u \|_{H^{1,p}_{{\rm AGS}}({\rm X})} \end{equation*}

for every $u\in L^p({\rm X})$.

Proof. Direct consequence of theorems 1.1, 6.9 and proposition 3.3.

6.2. Comments on the main results with the λ-integral

If one considers $(\varepsilon, \lambda)$-upper gradients instead of ɛ-upper gradients, for $\lambda \in [0,1]$, one defines natural variants of the functionals ${\rm F}_{\mathscr{C}}$ and ${\rm F}_{\mathscr{C},\,{\rm Lip}}$, denoted by ${\rm F}_{\mathscr{C}^{}}^\lambda$ and ${\rm F}_{\mathscr{C}^{},\,{\rm Lip}}^\lambda$. Their relaxations are $\tilde{\rm F}_{{\rm \mathscr{C}^{}}}^\lambda$ and $\tilde{\rm F}_{{\rm \mathscr{C}^{},\,{\rm Lip}}}^\lambda$. Let us outline some differences.

For $\lambda \neq \frac{1}{2}$, the symmetric property in (6) does not hold, see remark 2.1. Therefore, it is not obvious that ${\rm F}_{\mathscr{C}^{}}^\lambda$ and ${\rm F}_{\mathscr{C}^{},\,{\rm Lip}}^\lambda$ satisfy property (c) of section 2.1. This is due to the fact that it is not true in general that if $g\in {\rm UG}^{\varepsilon,\lambda}(u)$ then $g \in {\rm UG}^{\varepsilon,\lambda}(-u)$ when $\lambda \neq \frac12$. However, the same proofs of theorems 6.4 and 6.7 show that $\tilde{\rm F}_{{\rm \mathscr{C}^{},\,{\rm Lip}}}^\lambda(u) = \tilde{\rm F}_{{\rm curve}}(u) = \tilde{\rm F}_{{\rm \mathscr{C}^{}}}^\lambda(u)$ for every $u\in L^p({\rm X})$, if $({\rm X},{\sf d})$ is complete. In particular, a posteriori, $\tilde{\rm F}_{{\rm \mathscr{C}^{}}}^\lambda$ and $\tilde{\rm F}_{{\rm \mathscr{C}^{},{\rm Lip}}}^\lambda$ are seminorms when $({\rm X},{\sf d})$ is complete; the related Sobolev spaces are denoted by $H^{1,p}_{{\rm \mathscr{C}^{}, \, {\rm Lip},\lambda}}$ and $H^{1,p}_{{\rm \mathscr{C}^{},\lambda}}$. There are some subtleties to be taken into consideration.

First, the proof of proposition 6.8 holds for every $\lambda \in [0,1]$ under the additional assumption that $\varphi \ge 0$, which is enough to perform Step 1 in the proof of theorem 6.7. One can follow the same proof verbatim, removing the absolute values except for the first term in the third line, and replacing the first two $\frac12$-factors on the second line with $(1-\lambda)$ and the other two $\frac12$-factors with λ.

Second, one can use remark 2.10 to arrive at a contradiction in Steps 8 and 9.

Theorem 6.9 holds also for the spaces $H^{1,p}_{{\rm \mathscr{C}^{}, \, {\rm Lip},\lambda}}$ and $H^{1,p}_{{\rm \mathscr{C}^{},\lambda}}$, for every $\lambda \in [0,1]$. For $H^{1,p}_{{\rm \mathscr{C}^{}, \, {\rm Lip},\lambda}}$ the proof is identical. Also for $H^{1,p}_{{\rm \mathscr{C}^{},\lambda}}$, when $\lambda \in (0,1)$, the proof is the same, in view of remark 5.8. When $\lambda \in \{0,1\}$ one needs a different argument because it is no longer true that ${\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon,1}(\mathscr{C}^{}(\bar{{\rm X}}\setminus {\rm X})) = 0$, and similarly for λ = 0, see remark 5.8. We do it for the case λ = 1, the other being similar. We extend $u\in \mathcal{L}^p({\rm X})$ as $\bar{u}(z) := \lim_{r\to 0}\sup_{w\in B_r(z) \cap {\rm X}} u(w)$, for $z \in \bar{{\rm X}}\setminus {\rm X}$. Moreover, we extend every $\mathfrak m$-measurable $g\in {\rm UG}^{\varepsilon,1}(u)$ defined on ${\rm X}$ by setting it equal to $+\infty$ on $\bar{{\rm X}}\setminus {\rm X}$. This is a $\bar{\mathfrak m}$-measurable function by (2). We claim that $\bar{g}\in {\rm UG}^{\frac{\varepsilon}{2},1}(\bar{u})$. Let ${\sf c} = \{q_i\}_{i=0}^N \in \mathscr{C}^{\frac{\varepsilon}{2}}(\bar{{\rm X}})$. If there exists $i \in \{0,\ldots, N-1\}$ such that $q_i \in \bar{{\rm X}}\setminus {\rm X}$ then ${}^1{\int}_{{\sf c}} \bar{g} = +\infty$ and there is nothing to prove. Otherwise $q_i \in {\rm X}$ for every $i\in \{0,\ldots,N-1\}$. For every $w \in B_r(\omega({\sf c})) \cap {\rm X}$ we have that ${\sf c}_w := \{q_0,\ldots,q_{N-1},w\}$ is an ɛ-chain contained in ${\rm X}$ if $r \lt \frac{\varepsilon}{2}$. For every $0 \lt r \lt \frac{\varepsilon}{2}$ we have

\begin{align*} \bar{u}(\omega({\sf c})) - \bar{u}(\alpha({\sf c})) &\le \sup_{w\in B_r(\omega({\sf c})) \cap {\rm X}} u(w) - u(\alpha({\sf c})) \le \sup_{w\in B_r(\omega({\sf c})) \cap {\rm X}} {}^1{\int}_{{\sf c}_w} g \\ &= \sup_{w\in B_r(\omega({\sf c})) \cap {\rm X}} \left( \sum_{i=0}^{N-2} g(q_i){\sf d}(q_i,q_{i+1}) + g(q_{N-1}){\sf d}(q_{N-1},w) \right) \\ &\le \sum_{i=0}^{N-2} g(q_i){\sf d}(q_i,q_{i+1}) + g(q_{N-1})({\sf d}(q_{N-1},q_N) + r). \end{align*}

By taking r → 0 on the right hand side we get $\bar{u}(\omega({\sf c})) - \bar{u}(\alpha({\sf c})) \le {}^1{\int}_{{\sf c}} \bar{g}$. This is enough to conclude the proof.

As a consequence, the spaces $H^{1,p}_{{\rm \mathscr{C}^{}, \lambda}}({\rm X})$ are all isometric, for every possible value of $\lambda \in [0,1]$, even when ${\rm X}$ is not complete. The same holds for the spaces $H^{1,p}_{{\rm \mathscr{C}^{},\,{\rm Lip}, \lambda}}({\rm X})$.

7. Poincaré inequality

We recall the notion of Poincaré inequality that we will use. Let $u\colon {\rm X} \to \mathbb{R}$, $g\colon {\rm X} \to [0,+\infty]$ be locally integrable and let $p \ge 1$. We say that the couple (u, g) satisfies a p-Poincaré inequality if there exist constants $\lambda, C \geq 1$ such that

\begin{equation*} -\!\!\!\!\!\!\int_{B_{r}(x)} \left\vert u - -\!\!\!\!\!\!\int_{B_r(x)}u\,{\mathrm d}\mathfrak m\right\vert \,{\mathrm d} \mathfrak m \leq Cr\left(-\!\!\!\!\!\!\int_{B_{\lambda r}(x)} g^p\,{\mathrm d}\mathfrak m\right)^{\frac{1}{p}} \end{equation*}

for every ball $B_r(x)\subseteq {\rm X}$. The following result is a consequence of theorem 1.2 and proposition 3.3.

Corollary 7.1. Let $({\rm X},{\sf d},\mathfrak m)$ be a metric measure space. Then it satisfies a p-Poincaré inequality for all couples $(u,{\rm lip}\,u)$, where $u\in {\rm Lip}({\rm X})$, if and only if it satisfies a p-Poincaré inequality for all couples (u, g), where u is Borel and g is $\mathfrak m$-measurable with $g\in {\rm UG}^{\varepsilon}(u)$ for some ɛ > 0, with the same constants. Moreover, this happens if and only if the metric completion $(\bar{{\rm X}},\bar{{\sf d}},\bar{\mathfrak m})$ satisfies a p-Poincaré inequality for all couples $(u,{\rm lip}\,u)$, where $u\in {\rm Lip}(\bar{{\rm X}})$.

Proof. Let us consider the first equivalence. The if implication is trivial by lemma 4.3 and dominated convergence. The converse implication follows by applying theorem 1.2 to the metric measure space $(B_{\lambda r}(x), {\sf d},\mathfrak m)$. This gives a sequence $u_j \in {\rm Lip}(B_{\lambda r}(x))$ such that $u_j \to u$ in $L^p(B_{\lambda r}(x))$ and such that

\begin{equation*}\varliminf_{j\to +\infty} \| {\rm lip}\, u_j\|_{L^p(B_{\lambda r}(x))} = \tilde{\rm F}_{{\rm \mathscr{C}^{}}}(u) \le {\rm F}_{\mathscr{C}^{}}(u) \le \| g\|_{L^p(B_{\lambda r}(x))}.\end{equation*}

Note that $\tilde{\rm F}_{{\rm \mathscr{C}^{}}}(u)$ and ${\rm F}_{\mathscr{C}}(u)$ are defined on the metric measure space $(B_{\lambda r}(x), {\sf d},\mathfrak m)$. To apply the hypothesis and conclude, we consider any Lipschitz extension $\tilde{u}_j \in {\rm Lip}({\rm X})$ of uj. The last equivalence follows from proposition 3.3.

Remark 7.2. The previous corollary is not true if we consider the p-Poincaré inequality for all couples (u, g) with u Borel and $g\in {\rm UG}^{}(u)$. Indeed the metric measure space $({\rm X},{\sf d},\mathfrak m) = ([0,1]\setminus \mathbb{Q},{\sf d}_e, \mathcal{L}^1 {|_{[0,1]\setminus \mathbb{Q}}})$ satisfies the 1-Poincaré inequality for all couples $(u,{\rm lip}\,u)$ with $u\in {\rm Lip}({\rm X})$, because of corollary 7.1. However, it does not satisfy the 1-Poincaré inequality for all couples (u, g) with u Borel and $g\in {\rm UG}^{}(u)$. Indeed $g \equiv 0$ is an upper gradient of every function u, since ${\rm X}$ is totally disconnected and therefore contains no non-constant rectifiable curves.

Remark 7.3. As a consequence of remark 6.6 we have the following fact. A metric measure space $({\rm X},{\sf d},\mathfrak m)$ such that $({\rm X},{\sf d})$ is complete satisfies a p-Poincaré inequality with respect to couples (u, g), where u is Borel and $g\in {\rm UG}^{}(u)$, if and only if it satisfies a p-Poincaré inequality with respect to couples (u, g) with $u \in {\rm Lip}({\rm X})$ and $g\in {\rm UG}^{}(u) \cap {\rm Lip}({\rm X})$, with the same constants. This result sharpens [Reference Keith24, theorem 2], in which $\mathfrak m$ is required to be doubling and whose proof does not guarantee that the constants of the Poincaré inequalities are the same; compare also with [Reference Heinonen, Koskela, Shanmugalingam and Tyson22, theorem 8.4.1].

7.1. Pointwise estimates with Riesz potential via chains

When the metric measure space is doubling, the Poincaré inequality is usually expressed in terms of pointwise estimates. We extend these classical results to our setting. In order to do that, we recall that, given a Borel function $u\colon {\rm X} \to \mathbb{R}$, a point $x\in {\rm X}$ is called a Lebesgue point of u if $u(x) = \lim_{r\to 0} -\!\!\!\!\!\!\int_{B_r(x)} u\,{\mathrm d}\mathfrak m$. The set of Lebesgue points of u is denoted by ${\rm Leb}(u) \subseteq {\rm X}$. If $u\in L^p({\rm X})$ for some $1\le p \lt +\infty$ then $\mathfrak m({\rm X}\setminus{\rm Leb}(u)) = 0$.

Proposition 7.4. Let $({\rm X},{\sf d},\mathfrak m)$ be a doubling metric measure space. The following properties are quantitatively equivalent:

  (i) ${\rm X}$ satisfies a p-Poincaré inequality for all couples $(u,{\rm lip}\,u)$, with $u\in {\rm Lip}({\rm X})$;

  (ii) there exist C > 0 and $L\ge 1$ such that for every Borel $u\colon {\rm X} \to \mathbb{R}$, for every $x,y\in {\rm Leb}(u)$, for every ɛ > 0 and for every $\mathfrak m$-measurable $g\in {\rm UG}^{\varepsilon}(u)$, it holds

    (22)\begin{equation} \vert u(x) - u(y) \vert ^p \le C{\sf d}(x,y)^{p-1}\int g^p \,{\mathrm d}\mathfrak m_{x,y}^L; \end{equation}
  (iii) there exist C > 0 and $L\ge 1$ such that for every $u\in {\rm Lip}({\rm X})$ and for every $x,y\in {\rm X}$ it holds

    (23)\begin{equation} \vert u(x) - u(y) \vert ^p \le C{\sf d}(x,y)^{p-1}\int ({\rm lip}\,u)^p \,{\mathrm d}\mathfrak m_{x,y}^L. \end{equation}

The measure $\mathfrak m_{x,y}^L$ appearing in (22) and (23) is defined as $R_{x,y}^L \mathfrak m$, where $R_{x,y}^L$ is the L-truncated Riesz potential with poles at $x,y$, namely

\begin{equation*}R_{x,y}^L(z) := \left(\frac{{\sf d}(x,z)}{\mathfrak m(B_{{\sf d}(x,z)}(x))} + \frac{{\sf d}(y,z)}{\mathfrak m(B_{{\sf d}(y,z)}(y))}\right) \chi_{B_{x,y}^L},\end{equation*}

where $B_{x,y}^L = B_{L{\sf d}(x,y)}(x) \cup B_{L{\sf d}(x,y)}(y)$. At $x,y$ we impose by definition that $R_{x,y}^L(x) = R_{x,y}^L(y) = 0$. If the measure $\mathfrak m$ is doubling then $\mathfrak m_{x,y}^L$ is a finite measure, more precisely (see [Reference Caputo and Cavallucci12, proposition 2.3], whose proof does not use the completeness of $({\rm X},{\sf d})$)

(24)\begin{equation} \mathfrak m_{x,y}^L({\rm X}) \le 8C_DL{\sf d}(x,y). \end{equation}
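For orientation, in the Euclidean space $(\mathbb{R}^n,|\cdot|,\mathcal{L}^n)$ one has $\mathcal{L}^n(B_r(z)) = \omega_n r^n$, so that, for $z \ne x,y$,

\begin{equation*} R_{x,y}^L(z) = \frac{1}{\omega_n}\left( \vert x-z\vert^{1-n} + \vert y-z\vert^{1-n}\right)\chi_{B_{x,y}^L}(z), \end{equation*}

i.e. $\mathfrak m_{x,y}^L$ is, up to the constant $\omega_n^{-1}$ and the truncation to $B_{x,y}^L$, the classical Riesz kernel of order one with poles at x and y; integrating in polar coordinates gives $\mathfrak m_{x,y}^L(\mathbb{R}^n) \le 4nL\,\vert x-y\vert$, consistently with (24).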

Proof of proposition 7.4

If (i) holds then $({\rm X},{\sf d},\mathfrak m)$ satisfies a p-Poincaré inequality for all couples (u, g) with u Borel and $\mathfrak m$-measurable $g\in {\rm UG}^{\varepsilon}(u)$ for some ɛ > 0, by corollary 7.1. Then (ii) can be proved as in [Reference Heinonen21, theorem 9.5]. Indeed, two things are needed: that $x,y$ are Lebesgue points of u and that the space ${\rm X}$ is geodesic. However, since $({\rm X},{\sf d},\mathfrak m)$ satisfies (i), the completion $(\bar{{\rm X}},\bar{{\sf d}},\bar{\mathfrak m})$ satisfies (i) as well. Therefore, after a biLipschitz change of the metric $\bar{{\sf d}}$, we can suppose that $\bar{{\sf d}}$ is geodesic. In general, ${\sf d}$ is not geodesic, but by density there are points of ${\rm X}$ arbitrarily close to every point of a fixed geodesic of $\bar{{\rm X}}$. Therefore the proof of [Reference Heinonen21, theorem 9.5] can be easily adapted.

Suppose (ii) holds and let $u\in {\rm Lip}({\rm X})$. We have ${\rm Leb}(u) = {\rm X}$, since u is continuous. Moreover, by lemma 4.3, ${\rm sl}_{\varepsilon}u \in {\rm UG}^{\varepsilon}(u)$ for every ɛ > 0. Therefore (22) implies that $\vert u(x) - u(y) \vert^p \le C{\sf d}(x,y)^{p-1}\int ({\rm sl}_{\varepsilon}u)^p \,{\mathrm d}\mathfrak m_{x,y}^L$ for every $x,y\in {\rm X}$ and every ɛ > 0. By dominated convergence, thanks to the fact that $\mathfrak m_{x,y}^L({\rm X}) \lt +\infty$ by (24), we get (23), so (iii) holds.

If (iii) holds then (i) holds by a combination of [Reference Heinonen21, theorem 9.5] and [Reference Heinonen, Koskela, Shanmugalingam and Tyson22, theorem 8.1.7].

Remark 7.5. The pointwise estimate of item (ii) cannot hold at every point. Indeed let $({\rm X},{\sf d},\mathfrak m) = (\mathbb{R},{\sf d}_e,\mathcal{L})$, $u=\chi_{\{0\}}$ and $g=+\infty\cdot\chi_{\{0\}} \in {\rm UG}^{\varepsilon}(u)$, for every ɛ > 0. If x = 0 and y = 1 then $\vert u(x)-u(y)\vert = 1$, while $\int g^p\,{\mathrm d}\mathfrak m_{x,y}^L = 0$, so the estimate in (22) fails for this couple of points (note that x = 0 is not a Lebesgue point of u). This is in contrast with the case of upper gradients along curves when $({\rm X},{\sf d})$ is complete. Indeed, even if a priori one gets the pointwise estimate with respect to every upper gradient only on the Lebesgue points of u, see [Reference Heinonen21, theorem 9.5] and [Reference Heinonen, Koskela, Shanmugalingam and Tyson22, theorem 8.1.7], in [Reference Caputo and Cavallucci11, theorem A.3] we showed that it actually holds everywhere.

The pointwise estimate in item (ii) of proposition 7.4 holds everywhere for chain upper gradients that assume finite values at x and y. This is established in the next result.

Proposition 7.6. Let $({\rm X},{\sf d},\mathfrak m)$ be a doubling metric measure space. Let $x,y\in {\rm X}$. The following properties are quantitatively equivalent:

  (i) there exist C > 0 and $L\ge 1$ such that (23) holds for every $u\in {\rm Lip}({\rm X})$;

  (ii) there exist C > 0 and $L\ge 1$ such that for every $\mathfrak m$-measurable function $g\colon {\rm X}\to [0,+\infty]$ with $g(x),g(y) \lt +\infty$ it holds

    (25)\begin{equation} \lim_{\varepsilon \to 0} \inf_{\substack{{\sf c} \in \mathscr{C}^{\varepsilon}_{x,y}\\ \ell({\sf c}) \le C{\sf d}(x,y)}} \left(\int_{{\sf c}} g\right)^p \le C{\sf d}(x,y)^{p-1}\int g^p\,{\mathrm d}\mathfrak m_{x,y}^L; \end{equation}
  (iii) there exist C > 0 and $L\ge 1$ such that (22) holds for every Borel $u\colon {\rm X} \to \mathbb{R}$, for every ɛ > 0 and for every $\mathfrak m$-measurable $g\in {\rm UG}^{\varepsilon}(u)$ such that $g(x),g(y) \lt +\infty$.

Proof of proposition 7.6

If (ii) holds, then (25) and the chain upper gradient inequality give

\begin{equation*}\vert u(x) - u(y) \vert^p \le \lim_{\varepsilon \to 0} \inf_{\substack{{\sf c} \in \mathscr{C}^{\varepsilon}_{x,y}\\ \ell({\sf c}) \le C{\sf d}(x,y)}} \left(\int_{{\sf c}} g\right)^p \le C{\sf d}(x,y)^{p-1}\int g^p\,{\mathrm d}\mathfrak m_{x,y}^L,\end{equation*}

which is (iii).

If (iii) holds and $u \in {\rm Lip}({\rm X})$, then (22), applied to ${\rm sl}_{\varepsilon}u \in {\rm UG}^{\varepsilon}(u)$ (see lemma 4.3), gives

\begin{equation*}\vert u(x) - u(y) \vert^p \le C{\sf d}(x,y)^{p-1} \int ({\rm sl}_{\varepsilon}u)^p\,{\mathrm d}\mathfrak m_{x,y}^L.\end{equation*}

Therefore (i) follows by applying dominated convergence.

It remains to prove the implication (i) $\Rightarrow$ (ii). For every ɛ > 0 we denote by ${\rm Y}_\varepsilon$ the ɛ-chain connected component of ${\rm X}$ containing x.

Reduction to the case $y \in {\rm Y}_\varepsilon$ for every ɛ > 0. We assume that y does not belong to ${\rm Y}_{\bar{\varepsilon}}$ for some $\bar{\varepsilon} \gt 0$. In this case condition (i) does not hold, so the implication is trivially true. Indeed, if $y\notin {\rm Y}_{\bar{\varepsilon}}$ then the function defined by $u \equiv 0$ on ${\rm Y}_{\bar{\varepsilon}}$ and $u\equiv 1$ on ${\rm X}\setminus {\rm Y}_{\bar{\varepsilon}}$ is Lipschitz, since ${\sf d}({\rm Y}_{\bar{\varepsilon}}, {\rm X} \setminus {\rm Y}_{\bar{\varepsilon}}) \gt \bar{\varepsilon}$ by (4); it has ${\rm lip}\,u \equiv 0$ and contradicts (i).

Reduction to the case $({\rm X},{\sf d})$ is complete. We claim that properties (i) and (ii) hold on $({\rm X},{\sf d},\mathfrak m)$ if and only if they hold on the completion $(\bar{{\rm X}}, \bar{{\sf d}}, \bar{\mathfrak m})$. If they hold on ${\rm X}$ then they clearly hold on $\bar{{\rm X}}$. The converse implication for (i) follows by arguing as in proposition 3.3.

Regarding (ii) we argue as follows: given an $\mathfrak m$-measurable function $g\colon {\rm X} \to [0,+\infty]$ such that $g(x),g(y) \lt +\infty$ we extend it to $\bar{g} \colon \bar{{\rm X}} \to [0,+\infty]$ setting $\bar{g} \equiv +\infty$ on $\bar{{\rm X}} \setminus {\rm X}$. Observe that $\bar{g}$ is $\bar{\mathfrak m}$-measurable by (2). If $\int g^p\,{\mathrm d}\mathfrak m_{x,y}^L = +\infty$ there is nothing to prove. Otherwise, condition (ii) on $\bar{{\rm X}}$ gives that for every η > 0 there exist ɛ > 0 and a chain ${\sf c}_\eta \in \mathscr{C}^{\varepsilon}_{x,y}$ such that $\ell({\sf c}_\eta) \le C{\sf d}(x,y)$ and $\left(\int_{{\sf c}_\eta} \bar{g} \right)^p \le C{\sf d}(x,y)^{p-1}\int \bar{g}^p\,{\mathrm d}\bar{\mathfrak m}_{x,y}^L + \eta \lt +\infty$. Then ${\sf c}_\eta \subseteq {\rm X}$ for every η > 0, since otherwise we would have $\int_{{\sf c}_\eta} \bar{g} = +\infty$. Therefore (25) holds for g on ${\rm X}$.

From now on, we assume that $({\rm X},{\sf d})$ is complete and $x,y$ belongs to the same ɛ-chain connected component ${\rm Y}_\varepsilon$ for every ɛ > 0.

Main argument. We introduce two additional conditions. (i) $_{{\rm Lip}_{\rm loc}}$: there exist C > 0 and $L\ge 1$ such that (22) holds for every $u\in {\rm Lip}_{\rm loc}({\rm X})$, every ɛ > 0 and every bounded $g\in {\rm LUG}^{\varepsilon}(u)$. (ii) $_{{\rm Lip}}$: there exist C > 0 and $L\ge 1$ such that for every $g\in {\rm Lip}({\rm X})$, $g\ge 0$ and bounded, and for every ɛ > 0 it holds that

\begin{equation*} \inf_{\substack{{\sf c} \in \mathscr{C}^{\varepsilon}_{x,y}\\ \ell({\sf c}) \le C{\sf d}(x,y)}} \left(\int_{{\sf c}} g\right)^p \le C{\sf d}(x,y)^{p-1}\int g^p\,{\mathrm d}\mathfrak m_{x,y}^L. \end{equation*}

We end the proof by showing the following chain of implications: (i) $\Rightarrow$ (i) $_{{\rm Lip}_{\rm loc}}$ $\Rightarrow$ (ii) $_{\rm Lip}$ $\Rightarrow$ (ii).

Suppose (i) holds, let $u \in {\rm Lip}_{\rm loc}({\rm X})$ and $g\in {\rm LUG}^{\varepsilon}(u)$. Lemma 4.3 says that ${\rm lip}\,u \le g$. The function u is Lipschitz on the compact set $\overline{B}_{x,y}^L$, because of [Reference Beer and Isabel Garrido10, theorem 4.2]. By the McShane extension theorem we can find a Lipschitz map $\hat{u} \in {\rm Lip}({\rm X})$ which coincides with u on $\overline{B}_{x,y}^L$. Applying (i), and using that ${\rm lip}\,\hat{u} = {\rm lip}\,u$ $\mathfrak m_{x,y}^L$-a.e., we get

\begin{equation*} \begin{aligned} \vert u(x) - u(y) \vert^p = \vert \hat{u}(x) - \hat{u}(y) \vert^p &\le C{\sf d}(x,y)^{p-1}\int ({\rm lip}\,\hat{u})^p\,{\mathrm d}\mathfrak m_{x,y}^L\\ &= C{\sf d}(x,y)^{p-1}\int ({\rm lip}\,{u})^p\,{\mathrm d}\mathfrak m_{x,y}^L \\ &\le C{\sf d}(x,y)^{p-1}\int g^p\,{\mathrm d}\mathfrak m_{x,y}^L, \end{aligned} \end{equation*}

which proves (i) $_{{\rm Lip}_{\rm loc}}$.

To prove that (i) $_{{\rm Lip}_{\rm loc}} \Rightarrow$ (ii) $_{{\rm Lip}}$, we adapt the argument of [Reference Eriksson-Bique16, theorem 1.5]. We fix ɛ > 0 and a bounded $g\in {\rm Lip}({\rm X})$, with $g\ge 0$. We claim that (ii) $_{\rm Lip}$ holds with $C' = 2^{p+4}CC_DL$ and $L'= \max\{L,C'\}$. For every δ > 0 such that $\int_{{\rm X}} g^p\,{\mathrm d}\mathfrak m_{x,y}^{L} \lt \delta^p \mathfrak m_{x,y}^{L}({\rm X})$ we consider the function

\begin{equation*}u_\delta\colon {\rm X} \to [0,+\infty),\qquad u_\delta(z) = \begin{cases} \inf\left\{\int_{\sf c} (g + \delta)\,:\, {\sf c} \in \mathscr{C}^{\varepsilon}({\rm X}),\ \alpha({\sf c}) = x,\,\omega({\sf c})=z\right\} &\text{if } z \in {\rm Y}_\varepsilon,\\ 0 &\text{otherwise}. \end{cases} \end{equation*}

With usual techniques it is possible to show that $u_\delta$ is $(\sup_{\rm X} g + \delta)$-Lipschitz up to scale ɛ, i.e., if ${\sf d}(z,w) \le \varepsilon$ then $\vert u_\delta(z) - u_\delta(w)\vert \le (\sup_{\rm X} g + \delta) {\sf d}(z,w)$. Moreover, $(g + \delta)\in {\rm LUG}^{\varepsilon}(u_\delta)$; a short concatenation argument for the chain upper gradient inequality is sketched after the next display. Condition (i) $_{{\rm Lip}_{\rm loc}}$ applied to the couple $(u_\delta,g + \delta)$ implies that

\begin{equation*}u_\delta(y)^p \le C{\sf d}(x,y)^{p-1}\int_{{\rm X}} (g + \delta)^p\,{\mathrm d}\mathfrak m_{x,y}^L.\end{equation*}
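For completeness, we sketch one possible justification (presumably what is meant by ‘usual techniques’ above) of the chain upper gradient inequality for $u_\delta$. Given ${\sf c} \in \mathscr{C}^{\varepsilon}({\rm X})$ with $\alpha({\sf c}), \omega({\sf c}) \in {\rm Y}_\varepsilon$ and any ${\sf c}' \in \mathscr{C}^{\varepsilon}({\rm X})$ with $\alpha({\sf c}') = x$ and $\omega({\sf c}') = \alpha({\sf c})$, the concatenation of ${\sf c}'$ and ${\sf c}$ is again an ɛ-chain from x to $\omega({\sf c})$, and the chain integral is additive on it, so that

\begin{equation*} u_\delta(\omega({\sf c})) \le \int_{{\sf c}'} (g+\delta) + \int_{{\sf c}} (g+\delta). \end{equation*}

Taking the infimum over ${\sf c}'$ gives $u_\delta(\omega({\sf c})) - u_\delta(\alpha({\sf c})) \le \int_{{\sf c}} (g+\delta)$; the reversed chain gives the symmetric bound, while chains whose points lie outside ${\rm Y}_\varepsilon$ have both endpoint values equal to 0.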

By definition of uδ we have that $\int_{{\rm X}} (g + \delta)^p\,{\mathrm d}\mathfrak m_{x,y}^{L} \gt 0$ and that we can find chains ${\sf c}_\delta \in \mathscr{C}^{\varepsilon}_{x,y}({\rm X})$ such that

\begin{equation*}\left(\int_{{\sf c}_\delta} (g + \delta)\right)^p \le 2C{\sf d}(x,y)^{p-1}\int_{{\rm X}} (g + \delta)^p\,{\mathrm d}\mathfrak m_{x,y}^L.\end{equation*}

Moreover, using that $(g(z)+\delta)^p \le 2^{p-1}(g(z)^p + \delta^p)$ for all $z \in {\rm X}$, we have

(26)\begin{equation} \begin{aligned} \delta^p\ell({\sf c}_\delta)^p \le \left(\int_{{\sf c}_\delta} (g + \delta)\right)^p &\le 2^{p}C{\sf d}(x,y)^{p-1}\left(\int_{{\rm X}} g^p \,{\mathrm d}\mathfrak m_{x,y}^L + \delta^p\mathfrak m_{x,y}^L({\rm X})\right) \\ &\le 2^{p + 1}C{\sf d}(x,y)^{p-1}\left( \delta^p\mathfrak m_{x,y}^L({\rm X})\right)\\ &\stackrel{24}{\le} 2^{p+1}C\cdot 8C_DL{\sf d}(x,y)^p \delta^p. \end{aligned} \end{equation}

This implies that $\ell({\sf c}_\delta) \le C'{\sf d}(x,y)$ for every δ.

If $\int_{{\rm X}} g^p\,{\mathrm d}\mathfrak m_{x,y}^{L'} \gt 0$ then, by choosing δ such that $\delta^p\mathfrak m_{x,y}^L({\rm X}) \lt 2\int_{{\rm X}} g^p \,{\mathrm d}\mathfrak m_{x,y}^{L'}$, we have that the chain ${\sf c}_\delta$ satisfies

\begin{equation*}\left(\int_{{\sf c}_\delta} g\right)^p \le 3\cdot2^{p+1}C{\sf d}(x,y)^{p-1}\int_{{\rm X}} g^p\,{\mathrm d}\mathfrak m_{x,y}^{L'} \le C'{\sf d}(x,y)^{p-1}\int_{{\rm X}} g^p\,{\mathrm d}\mathfrak m_{x,y}^{L'}\end{equation*}

and one can take ${\sf c} = {\sf c}_\delta$ to conclude.

If $\int_{{\rm X}} g^p\,{\mathrm d}\mathfrak m_{x,y}^{L'} = 0$, so $g\equiv 0$ on $B_{x,y}^{L'}$ since g is Lipschitz, we argue as follows. By (26) we have the existence of a chain ${\sf c} \in \mathscr{C}^{\varepsilon}_{x,y}({\rm X})$ with $\ell({\sf c}) \le C'{\sf d}(x,y)$, so ${\sf c} \subseteq B_{x,y}^{C'} \subseteq B_{x,y}^{L'}$. Therefore, $\int_{{\sf c}} g = 0$ and (ii) $_{\rm Lip}$ holds also in this case.

Suppose (ii) $_{\rm Lip}$ holds. Since ${\rm X}$ is complete, we can use proposition 2.2 and lemma 2.7 to show that for every $g\in {\rm Lip}({\rm X})$, $g\ge 0$ and bounded, there exists a curve $\gamma \in \Gamma_{x,y}$ with $\ell(\gamma) \le C{\sf d}(x,y)$ and such that $\left(\int_\gamma g\right)^p \le C {\sf d}(x,y)^{p-1}\int g^p \,{\mathrm d}\mathfrak m_{x,y}^L$. This is condition (iii) of [Reference Caputo and Cavallucci11, theorem A.3] which is equivalent to the following: for every Borel $g\colon {\rm X} \to [0,+\infty]$ there exists $\gamma \in \Gamma_{x,y}$ such that $\ell(\gamma) \le C{\sf d}(x,y)$ and $\left(\int_\gamma g\right)^p \le C {\sf d}(x,y)^{p-1}\int g^p \,{\mathrm d}\mathfrak m_{x,y}^L$. If moreover $g(x), g(y) \lt +\infty$ we can use proposition 2.8 to find chains ${\sf c}_j \in \mathscr{C}^{\frac{1}{j}}_{x,y}$ such that $\int_\gamma g \ge \varlimsup_{j\to +\infty} \int_{{\sf c}_j}g$. This implies (ii) for Borel functions g. The usual application of Vitali-Carathéodory’s Theorem [Reference Heinonen, Koskela, Shanmugalingam and Tyson22, p.108] to the metric measure space $({\rm X},{\sf d},\mathfrak m_{x,y}^L)$ and monotone convergence theorem for decreasing sequences gives (ii) for every $\mathfrak m$-measurable function.

Remark 7.7. The proof above shows that, in case $({\rm X},{\sf d})$ is complete, the conditions of proposition 7.6 are equivalent to the following:

(iii) $_{\rm UG}^{}$ there exist C > 0 and $L\ge 1$ such that for every Borel $u\colon {\rm X} \to \mathbb{R}$ and for every $ g\in {\rm UG}^{}(u)$ it holds $\vert u(x) - u(y) \vert ^p \le C{\sf d}(x,y)^{p-1}\int g^p \,{\mathrm d}\mathfrak m_{x,y}^L.$ Indeed, if (iii) $_{{\rm UG}^{}}$ holds then (i) holds because ${\rm lip}\,u \in {\rm UG}^{}(u)$ if $u\in {\rm Lip}({\rm X})$. Vice versa, in the proof we showed that the conditions of proposition 7.6 are also equivalent to condition (iii) of [Reference Caputo and Cavallucci11, theorem A.3], which is in turn equivalent to (iii) $_{{\rm UG}^{}}$ by the same [Reference Caputo and Cavallucci11, theorem A.3].

This generalizes the result of [Reference Caputo and Cavallucci11, theorem A.3], in which the implication from item (i) of proposition 7.6 to (iii) $_{\rm UG}^{}$ is proved under the additional assumption of local quasiconvexity of the space. By using chains as we did, we are able to remove this assumption and to show the equivalence of the pointwise estimates in general.

The reason behind this improvement is the following. A standard technique, which we also used in the proof of proposition 7.6, consists in taking a bounded function g and associating with it the functions

\begin{equation*}u_{\text{curve}}(z) := \inf\left\{\int_\gamma g \,:\, \gamma \ \text{curve}, \alpha(\gamma)=x,\,\omega(\gamma)=z \right\},\end{equation*}
\begin{equation*}u_{\varepsilon\text{-chain}}(z) := \inf\left\{\int_{\sf c} g \,:\, {\sf c}\in \mathscr{C}^{\varepsilon}, \alpha({\sf c})=x,\,\omega({\sf c})=z\right\}.\end{equation*}

As shown in the proof of proposition 7.6, using the discrete nature of chains it is possible to prove that $u_{\varepsilon\text{-chain}}$ is locally Lipschitz, actually Lipschitz up to scale ɛ, on the ɛ-chain connected component containing x. On the other hand, in order to prove that $u_{\text{curve}}$ is locally Lipschitz one needs some connectivity property of the metric space ${\rm X}$, such as local quasiconvexity.
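Purely as an illustration of this discreteness, and not as part of the arguments above, $u_{\varepsilon\text{-chain}}$ can be computed exactly on a finite sample of points: it is a shortest-path value on the graph whose edges join points at distance at most ɛ, with edge weights $\frac{g(q_i)+g(q_{i+1})}{2}\,{\sf d}(q_i,q_{i+1})$ coming from the symmetric (λ = 1/2) chain integral. The following sketch in Python (the names and the finite sample standing in for ${\rm X}$ are our own choices) uses Dijkstra's algorithm; points outside the ɛ-chain connected component of the source keep the value $+\infty$, consistently with the infimum over an empty set of chains.

```python
import heapq
import numpy as np

def u_eps_chain(points, g, source, eps):
    """Infimum of the symmetric chain integral of g over eps-chains from
    points[source] to each sample point, computed by Dijkstra's algorithm."""
    n = len(points)
    dist = np.full(n, np.inf)
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, i = heapq.heappop(heap)
        if d > dist[i]:
            continue  # stale heap entry
        for j in range(n):
            step = np.linalg.norm(points[i] - points[j])
            if 0.0 < step <= eps:
                # symmetric (lambda = 1/2) edge weight of the chain integral
                nd = d + 0.5 * (g(points[i]) + g(points[j])) * step
                if nd < dist[j]:
                    dist[j] = nd
                    heapq.heappush(heap, (nd, j))
    return dist

# Toy example: a random sample of the unit square with g == 1, so the values
# approximate the eps-chain distance to the source point.
rng = np.random.default_rng(0)
pts = rng.random((300, 2))
values = u_eps_chain(pts, g=lambda z: 1.0, source=0, eps=0.2)
```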

7.2. Keith’s characterization via chains

The Poincaré inequality with upper gradients can be characterized via modulus estimates, see [Reference Keith24, theorem 2] and [Reference Caputo and Cavallucci11, proposition A.1]. We will show a similar statement for chains. Let $({\rm X},{\sf d},\mathfrak m)$ be a metric measure space.

Let $\mathcal{F}$ be a family of Borel functions on ${\rm X}$. Given a family of chains ${\sf C}$, the $(\varepsilon,p)$-modulus of ${\sf C}$ with respect to $\mathcal{F}$ is defined as

\begin{equation*}{\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}({\sf C}, \mathcal{F}, \mathfrak m) := \inf \left\{ \int \rho^p\,{\mathrm d}\mathfrak m\,:\, \rho \in {\rm Adm}^\varepsilon({\sf C}) \cap \mathcal{F}\right\},\end{equation*}

where we recall that

\begin{equation*}{\rm Adm}^{\varepsilon}({{\sf C}})=\left\{\rho \ge 0\,:\, \rho \ \text{Borel, }\int_{{\sf c}} \rho \ge 1 \ \text{for every } {\sf c} \in {{\sf C}} \cap \mathscr{C}^{\varepsilon} \right\}.\end{equation*}

If $\mathcal{F}$ is closed under finite sums, the same proof of proposition 5.1 shows that the assignment ${\sf C} \mapsto {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}({\sf C}, \mathcal{F}, \mathfrak m)$ satisfies

(27)\begin{equation} {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}\left(\bigcup_{i\in I}{\sf C}_i, \mathcal{F} , \mathfrak m\right) \le \sum_{i\in I} {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}\left({\sf C}_i, \mathcal{F}, \mathfrak m\right) \end{equation}

for a finite set of indices I. In general it does not define an outer measure. Notice that $ (0,+\infty) \ni \varepsilon \mapsto {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}({\sf C}, \mathcal{F}, \mathfrak m) \in [0,+\infty]$ is non-decreasing.

We also recall the definition of the p-modulus of a family of curves. Let Γ be a family of curves and let $\mathcal{F}$ be a family of Borel functions. Then

\begin{equation*}\textrm{Mod}_p(\Gamma, \mathcal{F}, \mathfrak m) := \inf\left\{\int \rho^p \,{\mathrm d}\mathfrak m\,:\, \rho \in {\rm Adm}(\Gamma) \cap \mathcal{F}\right\},\end{equation*}

where ${\rm Adm}(\Gamma) := \{\rho \colon {\rm X} \to [0,+\infty] \, : \, \int_\gamma \rho \ge 1 \ \text{for all } \gamma \in \Gamma\}$. If $\mathcal{F}$ is the class of all Borel functions, then we simply write $\textrm{Mod}_p(\Gamma, \mathfrak m)$.

For the next result we define $\mathcal{F}_{x,y} := \{g \colon {\rm X} \to [0,+\infty] \,:\, g \ \text{is Borel},\ g(x),g(y) \lt +\infty\}$, for $x,y\in {\rm X}$.

Proposition 7.8. Let $({\rm X},{\sf d},\mathfrak m)$ be a doubling metric measure space such that $({\rm X},{\sf d})$ is complete, $x,y\in {\rm X}$ and $L\ge 1$. Then

\begin{align*}\textrm{Mod}_p(\Gamma_{x,y}, \mathfrak m_{x,y}^L) &= \lim_{\varepsilon \to 0} {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}_{x,y}, \mathcal{F}_{x,y} ,\mathfrak m_{x,y}^L) \\ &= \lim_{\varepsilon \to 0} {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}_{x,y}, {\rm Lip}({\rm X}), \mathfrak m_{x,y}^L).\end{align*}

Proof. During this proof we use the notation $\mathscr{C}^{\varepsilon,\Lambda}_{x,y}$ to denote the family of chains ${\sf c} \in \mathscr{C}^{\varepsilon}_{x,y}$ such that $\ell({\sf c}) \le \Lambda{\sf d}(x,y)$. In the same way, $\Gamma_{x,y}^\Lambda$ denotes the family of rectifiable curves with $\alpha(\gamma) = x$, $\omega(\gamma) = y$ and $\ell(\gamma) \le \Lambda {\sf d}(x,y)$. We want to show

\begin{equation*} \lim_{\varepsilon \to 0} {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}_{x,y}, {\rm Lip}({\rm X}),\mathfrak m_{x,y}^L) \le \textrm{Mod}_p(\Gamma_{x,y}, \mathfrak m_{x,y}^L). \end{equation*}

We fix δ > 0. The same proof of [Reference Caputo and Cavallucci12, lemma A.2], together with (27), shows that we can find $\Lambda \ge 1$ such that ${\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}_{x,y}, {\rm Lip}({\rm X}) ,\mathfrak m_{x,y}^L) \le {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{\varepsilon, \Lambda}_{x,y}, {\rm Lip}({\rm X}) ,\mathfrak m_{x,y}^L) + \delta$. We consider the compact family of curves $\Gamma_{x,y}^\Lambda$. By [Reference Keith24, proposition 6] we have $\textrm{Mod}_p(\Gamma_{x,y}^\Lambda, \mathfrak m_{x,y}^L) = \textrm{Mod}_p(\Gamma_{x,y}^\Lambda, {\rm Lip}({\rm X}), \mathfrak m_{x,y}^L)$. Let $\rho \in {\rm Adm}(\Gamma_{x,y}^\Lambda) \cap {\rm Lip}({\rm X})$. We claim that

(28)\begin{equation} \lim_{\varepsilon \to 0} \inf_{{\sf c} \in \mathscr{C}^{\varepsilon, \Lambda}_{x,y}} \int_{\sf c} \rho \ge 1. \end{equation}

Assuming the claim holds true, this implies that for every η > 0 there exists $\varepsilon_\eta \gt 0$ such that if $\varepsilon \le \varepsilon_\eta$ then $(1+\eta)\rho \in {\rm Adm}^\varepsilon(\mathscr{C}^{\varepsilon,\Lambda}_{x,y})$. Hence

\begin{equation*}\lim_{\varepsilon \to 0} {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{\varepsilon, \Lambda}_{x,y}, {\rm Lip}({\rm X}) ,\mathfrak m_{x,y}^L) \le \lim_{\eta \to 0} \int (1+\eta)^p\rho^p\,{\mathrm d}\mathfrak m_{x,y}^L = \int \rho^p\,{\mathrm d}\mathfrak m_{x,y}^L.\end{equation*}

By the arbitrariness of ρ, we would get

\begin{equation*}\lim_{\varepsilon \to 0} {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}_{x,y}, {\rm Lip}({\rm X}) ,\mathfrak m_{x,y}^L) \le \textrm{Mod}_p(\Gamma_{x,y}^\Lambda, \mathfrak m_{x,y}^L) + \delta \le \textrm{Mod}_p(\Gamma_{x,y}, \mathfrak m_{x,y}^L) + \delta\end{equation*}

By taking δ → 0 we would conclude that

\begin{equation*}\lim_{\varepsilon \to 0} {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}_{x,y}, {\rm Lip}({\rm X}) ,\mathfrak m_{x,y}^L) \le \textrm{Mod}_p(\Gamma_{x,y}, \mathfrak m_{x,y}^L).\end{equation*}

We prove the claim. Suppose (28) is not true. Then there exist η > 0 and chains ${\sf c}_\varepsilon \in \mathscr{C}^{\varepsilon,\Lambda}_{x,y}$ such that $\int_{{\sf c}_{\varepsilon}} \rho \lt 1-\eta$, for every ɛ sufficiently small. We are in a position to apply proposition 2.2 in order to find a curve $\gamma \in \Gamma_{x,y}^\Lambda$ such that ${\sf c}_{\varepsilon}$ subconverges to γ as ɛ → 0. By lemma 2.7 we have

\begin{equation*}\int_\gamma \rho \le \varliminf_{\varepsilon \to 0} \int_{{\sf c}_\varepsilon} \rho \lt 1-\eta,\end{equation*}

which is a contradiction to the fact that $\rho \in {\rm Adm}(\Gamma_{x,y}^\Lambda)$. This concludes the proof of the claim.

The inequality

\begin{equation*}{\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}_{x,y}, \mathcal{F}_{x,y} ,\mathfrak m_{x,y}^L) \le {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}_{x,y}, {\rm Lip}({\rm X}), \mathfrak m_{x,y}^L)\end{equation*}

holds trivially for every ɛ > 0. It remains to show that

(29)\begin{equation} \textrm{Mod}_p(\Gamma_{x,y}, \mathfrak m_{x,y}^L) \le \lim_{\varepsilon \to 0} {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}_{x,y}, \mathcal{F}_{x,y} ,\mathfrak m_{x,y}^L). \end{equation}

Let $\rho \in {\rm Adm}^\varepsilon(\mathscr{C}^{}_{x,y}) \cap \mathcal{F}_{x,y}$. We claim that $\rho \in {\rm Adm}(\Gamma_{x,y})$. Let $\gamma \in \Gamma_{x,y}$. If $\int_\gamma \rho = +\infty$ there is nothing to prove. Otherwise, applying proposition 2.8, we have that

\begin{equation*}\int_\gamma \rho \ge \varlimsup_{j\to +\infty} \int_{{\sf c}_{t,n_j}} \rho \ge 1,\end{equation*}

for ${\sf c}_{t,n_j} \in \mathscr{C}^{\frac{1}{n_j}}_{x,y}$ defined therein. By the arbitrariness of ρ we have

\begin{equation*}\textrm{Mod}_p(\Gamma_{x,y}, \mathfrak m_{x,y}^L) \le {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}_{x,y}, \mathcal{F}_{x,y} ,\mathfrak m_{x,y}^L),\end{equation*}

for every ɛ > 0. By taking the limit as ɛ → 0, we obtain (29) and we conclude the proof.

Remark 7.9. Notice that the statement of proposition 7.8 cannot be formulated with the class $\mathcal{F} = \{g\colon {\rm X} \to [0,+\infty]\,:\, g \text{Borel}\}$. Indeed, observe that ${\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}_{x,y},\mathfrak m_{x,y}^L) = 0$ as soon as $\mathfrak m(\{x,y\}) = 0$, because of lemma 5.5, while ${\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}_{x,y}, \mathcal{F}_{x,y} ,\mathfrak m_{x,y}^L)$ can be different from 0, because of proposition 7.8. The difference is due to the fact that ${\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\cdot, \mathcal{F}_{x,y} ,\mathfrak m_{x,y}^L)$ is not an outer measure.

Moreover, setting $\mathcal{F}_{x,y}^{\text{meas}} := \{g\colon {\rm X} \to [0,+\infty]\,:\, g \ \text{is } \mathfrak m\text{-measurable},\, g(x),g(y) \lt +\infty\}$, we have that

\begin{equation*}{\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}_{x,y}, \mathcal{F}_{x,y} ,\mathfrak m_{x,y}^L) = {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}_{x,y}, \mathcal{F}_{x,y}^{\text{meas}} ,\mathfrak m_{x,y}^L)\end{equation*}

for every metric measure space, even non complete, by Vitali-Carathéodory’s Theorem (cp. [Reference Heinonen, Koskela, Shanmugalingam and Tyson22, p.108]). Furthermore,

\begin{equation*} \begin{aligned} {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}_{x,y}, \mathcal{F}_{x,y}^\text{meas} ,\mathfrak m_{x,y}^L) &= {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}_{x,y}, \mathcal{F}_{x,y}^{\text{meas}} ,\bar{\mathfrak m}_{x,y}^L)\\ {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}_{x,y}, {\rm Lip}({\rm X}) ,\mathfrak m_{x,y}^L) &= {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}_{x,y}, {\rm Lip}(\bar{{\rm X}}) ,\bar{\mathfrak m}_{x,y}^L), \end{aligned} \end{equation*}

where the right hand sides are computed on the metric measure space $(\bar{{\rm X}}, \bar{{\sf d}}, \bar{\mathfrak m})$. In particular

\begin{equation*}\lim_{\varepsilon \to 0} {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}_{x,y}, \mathcal{F}_{x,y} ,\mathfrak m_{x,y}^L) = \lim_{\varepsilon \to 0} {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}_{x,y}, {\rm Lip}({\rm X}), \mathfrak m_{x,y}^L)\end{equation*}

holds true in every doubling metric measure space without the completeness assumption.

Corollary 7.10. Let $({\rm X},{\sf d},\mathfrak m)$ be a doubling metric measure space. Let $x,y\in {\rm X}$. Then the conditions of proposition 7.6 are equivalent to the following. There exist c > 0 and $L\ge 1$ such that

\begin{equation*}\lim_{\varepsilon \to 0} {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}_{x,y}, \mathcal{F}_{x,y} ,\mathfrak m_{x,y}^L) = \lim_{\varepsilon \to 0} {\mathscr C}\text{-}{\rm Mod}_{p}^{\varepsilon}(\mathscr{C}^{}_{x,y}, {\rm Lip}({\rm X}), \mathfrak m_{x,y}^L) \ge c{\sf d}(x,y)^{1-p}.\end{equation*}

Proof. As noticed in the proof of proposition 7.6 and remark 7.9, the conditions hold if and only if they hold on the metric completion $(\bar{{\rm X}},\bar{{\sf d}},\bar{\mathfrak m})$. Therefore the result follows from proposition 7.8 and [Reference Caputo and Cavallucci11, proposition A.1], see also the original [Reference Keith24, theorem 2].

7.3. Energy of separating sets via chains

In the case p = 1 we can extend our characterizations in [Reference Caputo and Cavallucci11] to possibly non-complete metric spaces. We need the notion of chain width of a given set $A \subset {\rm X}$, which is

\begin{equation*} \mathscr{C}^{}\textrm{-}{\rm width}_{x,y}(A):= \lim_{\varepsilon \to 0}\inf_{{\sf c} \in \mathscr{C}^{\varepsilon}_{x,y}} \int_{{\sf c}} \chi_A. \end{equation*}
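For instance, in $(\mathbb{R},{\sf d}_e,\mathcal{L}^1)$ with $x \lt a \lt b \lt y$ and $A = [a,b]$, one can check that $\mathscr{C}^{}\textrm{-}{\rm width}_{x,y}(A) = b-a$: ɛ-chains of equally spaced points from x to y give $\int_{{\sf c}} \chi_A \le (b-a) + 2\varepsilon$, while every ${\sf c} \in \mathscr{C}^{\varepsilon}_{x,y}$ contains a consecutive run of points of $[a,b]$ going from within distance ɛ of a to within distance ɛ of b, so that $\int_{{\sf c}} \chi_A \ge b-a-2\varepsilon$.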

We also need to recall the notion of separating set and of Minkowski content. Given $x,y\in {\rm X}$ we say that a set $\Omega \subseteq {\rm X}$ is separating if it is closed, x belongs to the interior of Ω and y belongs to $\Omega^c$. The family of separating sets between x and y is denoted by ${\rm SS}_{\textrm{top}}(x,y)$.

Given a measurable subset A of a metric measure space $({\rm X},{\sf d},\mathfrak m)$, we define its Minkowski content by

\begin{equation*}\mathfrak m^+(A) := \varliminf_{r\to 0} \frac{\mathfrak m(B_r(A)\setminus A)}{r}.\end{equation*}
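For instance, if $A \subseteq \mathbb{R}^n$ is a compact set with $C^2$ boundary and $\mathfrak m = \mathcal{L}^n$, then $\mathcal{L}^n(B_r(A)\setminus A) = r\,\mathcal{H}^{n-1}(\partial A) + o(r)$ as $r\to 0$, so that $\mathfrak m^+(A) = \mathcal{H}^{n-1}(\partial A)$: the Minkowski content measures the size of the boundary of A.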

The following theorem is the chain version of [Reference Caputo and Cavallucci11, theorem 1.4] and it is suited for the case p = 1.

Theorem 7.11. Let $({\rm X},{\sf d},\mathfrak m)$ be a doubling metric measure space. Let $x,y \in {\rm X}$. Then the following conditions are quantitatively equivalent:

  (i) there exist C > 0, $L \ge 1$ such that (23) holds for every $u \in {\rm Lip}({\rm X})$;

  (ii) there exist C > 0, $L \ge 1$ such that for every $\mathfrak m$-measurable $A \subseteq {\rm X}$ it holds

    \begin{equation*} \mathscr{C}^{}\textrm{-}{\rm width}_{x,y}(A) \le C \mathfrak m_{x,y}^L(A); \end{equation*}
  (iii) there exist c > 0, $L \ge 1$ such that for every $\Omega \in {\rm SS}_{\textrm{top}}(x,y)$ it holds $(\mathfrak m_{x,y}^L)^+(\Omega) \ge c$.

Proof. If (i) holds then item (ii) of proposition 7.6 holds. Applying it to the $\mathfrak m$-measurable function $g=\chi_A$, where $A\subseteq {\rm X}$ is $\mathfrak m$-measurable, we get that

\begin{equation*}\mathscr{C}^{}\textrm{-}{\rm width}_{x,y}(A) \le \lim_{\varepsilon \to 0} \inf_{\substack{{\sf c} \in \mathscr{C}^{\varepsilon}_{x,y} \\ \ell({\sf c}) \le C{\sf d}(x,y)}} \int_{{\sf c}} \chi_A \le C\int \chi_A\,{\mathrm d}\mathfrak m_{x,y}^L = C\mathfrak m_{x,y}^L(A).\end{equation*}

This shows (ii).

We now assume (ii) and we consider $\Omega \in {\rm SS}_{\rm top}(x,y)$. Let $0 \lt r \lt \min\{{\sf d}(x,\partial\Omega), {\sf d}(y,\partial\Omega)\}$. For $\varepsilon \lt r$, let ${\sf c} = \{q_i\}_{i=0}^N \in \mathscr{C}^{\varepsilon}_{x,y}$ and let ${\sf c}' = (q_m,\ldots,q_M)$ be a maximal subchain such that $q_i \in B_r(\Omega)\setminus \Omega$ for every $i=m,\ldots,M$. Therefore we have

\begin{equation*}\int_{{\sf c}}\chi_{B_r(\Omega)\setminus \Omega} \geq \int_{{\sf c}'}\chi_{B_r(\Omega)\setminus \Omega} \geq r-2\varepsilon,\end{equation*}

by maximality of ${\sf c}'$. By taking the limit for ɛ going to zero we find

(30)\begin{equation} \mathscr{C}^{}\textrm{-}{\rm width}_{x,y}(B_r(\Omega)\setminus \Omega) \geq \limsup_{\varepsilon \to 0} (r - 2\varepsilon) = r. \end{equation}

Hence we compute

\begin{equation*}(\mathfrak m_{x,y}^L)^+(\Omega) = \varliminf_{r \to 0}\frac{\mathfrak m_{x,y}^L(B_r(\Omega)\setminus \Omega)}{r} \stackrel{30}{\geq} \varliminf_{r \to 0}\frac{\mathfrak m_{x,y}^L(B_r(\Omega)\setminus \Omega)}{\mathscr{C}^{}\textrm{-}{\rm width}_{x,y}(B_r(\Omega)\setminus \Omega)} \geq \frac{1}{C}.\end{equation*}

This proves (iii).

It remains only to prove that (iii) implies (i). This is the last implication in the proof of [Reference Caputo and Cavallucci12, theorem 6.1], which we report for completeness. Let $u\colon {\rm X} \to \mathbb{R}$, $u\geq 0$ be a bounded Lipschitz function and let $x,y\in {\rm X}$. We can assume that $u(y) \lt u(x)$: the case $u(x)=u(y)$ is trivial, and the case $u(x) \lt u(y)$ is analogous, working with the sublevel sets $\lbrace u \le t \rbrace$. The sets $\Omega_t := \lbrace u \geq t \rbrace$ belong to ${\rm SS}_\text{top}(x,y)$ for all $t\in (u(y),u(x))$: indeed they are closed, x lies in the open set $\lbrace u \gt t \rbrace \subseteq \Omega_t$ and $u(y) \lt t$ gives $y \in \Omega_t^c$. So we can apply the coarea inequality for the Minkowski content (see [Reference Ambrosio, Di Marino and Gigli3, lemma 3.2]) with respect to the measure $\mathfrak m_{x,y}^L$ to get

\begin{equation*}c\, \vert u(x) - u(y) \vert \leq \int_{u(y)}^{u(x)} (\mathfrak m_{x,y}^L)^{+}(\lbrace u \geq t \rbrace) \,{\mathrm d} t \leq \int_{\rm X} {\rm lip}\,u \,{\mathrm d}\mathfrak m_{x,y}^L.\end{equation*}

Therefore item (i) follows with $C = 1/c$ for Lipschitz, nonnegative, bounded functions. A standard approximation argument gives the same estimate for all Lipschitz functions.
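For instance, given $u \in {\rm Lip}({\rm X})$ and $n \gt \max\{|u(x)|,|u(y)|\}$, the truncation $u_n := \min\{\max\{u,-n\},n\}+n$ is a nonnegative, bounded Lipschitz function with ${\rm lip}\,u_n \le {\rm lip}\,u$ and $u_n(x)-u_n(y)=u(x)-u(y)$, so that

\begin{equation*}|u(x)-u(y)| = |u_n(x)-u_n(y)| \le C\int_{\rm X} {\rm lip}\,u_n\,{\mathrm d}\mathfrak m_{x,y}^L \le C\int_{\rm X} {\rm lip}\,u\,{\mathrm d}\mathfrak m_{x,y}^L.\end{equation*}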

Remark 7.12. Condition (iii) of theorem 7.11 is denoted by (BMC)$_{x,y}$ in [Reference Caputo and Cavallucci11], meaning ‘big Minkowski content’. If it holds for every pair of points of ${\rm X}$ with the same constants, we say that $({\rm X},{\sf d},\mathfrak m)$ satisfies property (BMC). This property has been studied in [Reference Caputo and Cavallucci12], where it is shown to be equivalent to the 1-Poincaré inequality if $({\rm X},{\sf d}, \mathfrak m)$ is complete and doubling. Other properties regarding the boundary of separating sets have been studied there, and they are all equivalent to the 1-Poincaré inequality for complete, doubling metric measure spaces. In the non-complete case this is no longer true, since a separating set can have empty boundary, as in the following example. Let ${\rm X}=\mathbb{R}^2\setminus \{x=0\}$. We consider the points $A =(-1,0)$ and $B=(1,0)$ and the set $\Omega:=(-\infty, 0) \times \mathbb{R} \in {\rm SS}_{\rm top}(A,B)$. Here $\partial \Omega=\emptyset$: indeed both Ω and its complement $(0,+\infty)\times\mathbb{R}$ are open in ${\rm X}$, so Ω is closed with empty boundary. Moreover, ${\rm X}$ satisfies a 1-Poincaré inequality for all pairs $(u,{\rm lip}\,u)$ with $u\in {\rm Lip}({\rm X})$, because of corollary 7.1. However, with the notation of [Reference Caputo and Cavallucci12, theorem 6.1], the set Ω above does not satisfy (BH), (BH$_\textrm{R}$), (BH$^e$), (BH$^e_\textrm{R}$), (BP), (BP$_\text{R}$), (BC), (BAM), (BAM)$^e$ and (BAM)$^\pitchfork$. The last condition (BAM)$^\pitchfork$ can be expressed via chains in such a way that it becomes equivalent to (BMC).

Acknowledgements

The first author is supported by the European Union’s Horizon 2020 research and innovation programme (Grant Agreement No. 948021). We thank Panu Lahti for fruitful discussions about analysis on metric spaces which inspired us to start this project. We thank Pietro Wald for his comments during the preparation of this work. The authors thank the anonymous referee for their insightful comments.

Conflict of interest

All authors declare that they have no conflicts of interest.

Ethics approval

Our research consists of theoretical proofs and does not involve human participants, animal subjects, or sensitive data; no formal ethics approval is therefore required.

Funding

The first author is funded by the European Union’s Horizon 2020 research and innovation programme (Grant Agreement No. 948021).

Data availability

Data sharing not applicable to this article as no datasets were generated or analysed during the current study.

References

Ahlfors, L. and Beurling, A.. Conformal invariants and function-theoretic null-sets. Acta Math. 83 (1950), 101–129.
Ambrosio, L. and Di Marino, S.. Equivalent definitions of BV space and of total variation on metric measure spaces. J. Funct. Anal. 266 (2014), 4150–4188.
Ambrosio, L., Di Marino, S. and Gigli, N.. Perimeter as relaxed Minkowski content in metric measure spaces. Nonlinear Anal. 153 (2017), 78–88.
Ambrosio, L., Gigli, N. and Savaré, G.. Density of Lipschitz functions and equivalence of weak gradients in metric measure spaces. Rev. Mat. Iberoam. 29 (2013), 969–996.
Ambrosio, L., Gigli, N. and Savaré, G.. Calculus and heat flow in metric measure spaces and applications to spaces with Ricci bounds from below. Invent. Math. 195 (2014), 289–391.
Ambrosio, L., Ikonen, T., Lučić, D. and Pasqualetto, E.. Metric Sobolev spaces I: Equivalence of definitions. Milan J. Math. 92 (2024), 255–347.
Bate, D.. Structure of measures in Lipschitz differentiability spaces. J. Amer. Math. Soc. 28 (2015), 421–482.
Bate, D., Eriksson-Bique, S. and Soultanis, E.. Fragment-wise differentiable structures. Preprint, arXiv:2402.11284, 2024.
Bate, D. and Li, S.. Differentiability and Poincaré-type inequalities in metric measure spaces. Adv. Math. 333 (2018), 868–930.
Beer, G. and Isabel Garrido, M.. Locally Lipschitz functions, cofinal completeness, and UC spaces. J. Math. Anal. Appl. 428 (2015), 804–816.
Caputo, E. and Cavallucci, N.. A geometric approach to Poincaré inequality and Minkowski content of separating sets. Int. Math. Res. Not. IMRN (2025), no. 1, Paper No. rnae276, 30 pp.
Caputo, E. and Cavallucci, N.. Poincaré inequality and energy of separating sets. Adv. Calc. Var. 18 (2025), 915–942.
Caputo, E., Gigli, N. and Pasqualetto, E.. Parallel transport on non-collapsed $\mathrm{RCD}(K,N)$ spaces. J. Reine Angew. Math. 819 (2025), 135–204.
Cheeger, J.. Differentiability of Lipschitz functions on metric measure spaces. Geom. Funct. Anal. 9 (1999), 428–517.
Doob, J. L.. Stochastic Processes. Wiley Classics Library (John Wiley & Sons, Inc., New York, 1990). Reprint of the 1953 original.
Eriksson-Bique, S.. Alternative proof of Keith-Zhong self-improvement and connectivity. Ann. Acad. Sci. Fenn. Math. 44 (2019), 407–425.
Eriksson-Bique, S.. Characterizing spaces satisfying Poincaré inequalities and applications to differentiability. Geom. Funct. Anal. 29 (2019), 119–189.
Eriksson-Bique, S.. Density of Lipschitz functions in energy. Calc. Var. Partial Differential Equations 62 (2023), Paper No. 23.
Eriksson-Bique, S. and Poggi-Corradini, P.. On the sharp lower bound for duality of modulus. Proc. Amer. Math. Soc. 150 (2022), 2955–2968.
Eriksson-Bique, S. and Poggi-Corradini, P.. Density of continuous functions in Sobolev spaces with applications to capacity. Trans. Amer. Math. Soc. Ser. B 11 (2024), 901–944.
Heinonen, J.. Lectures on Analysis on Metric Spaces. Universitext (Springer-Verlag, New York, 2001).
Heinonen, J., Koskela, P., Shanmugalingam, N. and Tyson, J. T.. Sobolev Spaces on Metric Measure Spaces. New Mathematical Monographs, Vol. 27 (Cambridge University Press, Cambridge, 2015).
Jiang, R., Shanmugalingam, N., Yang, D. and Yuan, W.. Hajłasz gradients are upper gradients. J. Math. Anal. Appl. 422 (2015), 397–407.
Keith, S.. Modulus and the Poincaré inequality on metric measure spaces. Math. Z. 245 (2003), 255–292.
Keith, S. and Rajala, K.. A remark on Poincaré inequalities on metric measure spaces. Math. Scand. 95 (2004), 299–304.
Koskela, P.. Removable sets for Sobolev spaces. Ark. Mat. 37 (1999), 291–304.
Koskela, P. and MacManus, P.. Quasiconformal mappings and Sobolev spaces. Studia Math. 131 (1998), 1–17.
Koskela, P., Shanmugalingam, N. and Tuominen, H.. Removable sets for the Poincaré inequality on metric spaces. Indiana Univ. Math. J. 49 (2000), 333–352.
Lahti, P.. Capacitary density and removable sets for Newton-Sobolev functions in metric spaces. Calc. Var. Partial Differential Equations 62 (2023), Paper No. 20.
Lučić, D. and Pasqualetto, E.. Yet another proof of the density in energy of Lipschitz functions. Manuscripta Math. 175 (2024), 421–438.
Shanmugalingam, N.. Newtonian spaces: an extension of Sobolev spaces to metric measure spaces. Rev. Mat. Iberoamericana 16 (2000), 243–279.
Williams, M.. Geometric and analytic quasiconformality in metric measure spaces. Proc. Amer. Math. Soc. 140 (2012), 1251–1266.
Figure 1. The picture shows the definition of ${\sf c}_j^s$ and ${\sf c}_j^e$ in three different situations that cover all possible cases. On the left, $\alpha({\sf c}_j) \notin B_{2R-\frac{1}{j}}(x_0)$, so ${\sf c}_j^s = \emptyset$ and ${\sf c}_j^e \neq {\sf c}_j$. In the middle, $\alpha({\sf c}_j) \in B_{2R-\frac{1}{j}}(x_0)$ and ${\sf c}_j$ is contained in $B_{2R-\frac{1}{j}}(x_0)$, so ${\sf c}_j = {\sf c}_j^s = {\sf c}_j^e$. On the right, $\alpha({\sf c}_j) \in B_{2R-\frac{1}{j}}(x_0)$, but ${\sf c}_j \cap ({\rm X} \setminus B_{2R - \frac{1}{j}}(x_0)) \neq \emptyset$, so ${\sf c}_j^s \neq \emptyset$, ${\sf c}_j^s \neq {\sf c}_j$ and ${\sf c}_j^e \neq {\sf c}_j$.