1 The blowup-polynomial of a metric space and its distance matrix
This work aims to provide novel connections between metric geometry, the geometry of (real) polynomials, and algebraic combinatorics via partially symmetric functions. In particular, we introduce and study a polynomial invariant of each graph which, to the best of our knowledge, is novel.
1.1 Motivations
The original motivation for our paper came from the study of distance matrices
$D_G$
of graphs G – on both the algebraic and spectral sides:
-
• On the algebraic side, Graham and Pollak [Reference Graham and Pollak21] initiated the study of
$D_G$ by proving: if
$T_k$ is a tree on k nodes, then
$\det D_{T_k}$ is independent of the tree structure and depends only on k. By now, many variants of such results are proved, for trees as well as several other families of graphs, including with q-analogs, weightings, and combinations of both of these. (See, e.g., [Reference Choudhury and Khare16] and its references for a list of such papers, results, and their common unification.)
-
• Following the above work [Reference Graham and Pollak21], Graham also worked on the spectral side, and with Lovász, studied in [Reference Graham and Lovász20] the distance matrix of a tree, including computing its inverse and characteristic polynomial. This has since led to the intensive study of the roots, i.e., the “distance spectrum,” for trees and other graphs. See, e.g., the survey [Reference Aouchiche and Hansen3] for more on distance spectra.
A well-studied problem in spectral graph theory involves understanding which graphs are distance co-spectral – i.e., for which graphs
$H' \not \cong K'$
, if any, do
$D_{H'}, D_{K'}$
have the same spectra. Many such examples exist; see, e.g., the references in [Reference Drury and Lin18]. In particular, the characteristic polynomial of
$D_G$
does not “detect” the graph G. It is thus natural to seek some other byproduct of
$D_G$
which does – i.e., which recovers G up to isometry. In this paper, we find such a (to the best of our knowledge) novel graph invariant: a multivariate polynomial, which we call the blowup-polynomial of G, and which does detect G. Remarkably, this polynomial turns out to have several additional attractive properties:
-
• It is multi-affine in its arguments.
-
• It is also real-stable, so that its “support” yields a hitherto unexplored delta-matroid.
-
• The blowup-polynomial simultaneously encodes the determinants of all graph-blowups of G (defined presently), thereby connecting with the algebraic side (see the next paragraph).
-
• Its “univariate specialization” is a transformation of the characteristic polynomial of
$D_G$ , thereby connecting with the spectral side as well.
Thus, the blowup-polynomial that we introduce connects distance spectra for graphs – and more generally, for finite metric spaces – to other well-studied objects, including real-stable/Lorentzian polynomials and delta-matroids.
On the algebraic side, a natural question involves asking if there are graph families
$\{ G_i : i \in I \}$
(like trees on k vertices) for which the scalars
$\det (D_{G_i})$
behave “nicely” as a function of
$i \in I$
. As stated above, the family of blowups of a fixed graph G (which helps answer the preceding “spectral” question) also answers this question positively; moreover, the nature of the answer – multi-affine polynomiality – is desirable in conjunction with real-stability. In fact, we will obtain many of these results, both spectral and algebraic, in greater generality: for arbitrary finite metric spaces.
The key construction required for all of these contributions is that of a blowup, and we begin by defining it more generally, for arbitrary metric spaces that are discrete (i.e., every point is isolated).
Definition 1.1 Given a metric space
$(X,d)$
in which every point is isolated, and a function
$\mathbf {n} : X \to \mathbb {Z}_{>0}$
, the
$\mathbf {n}$
-blowup of X is the metric space
$X[\mathbf {n}]$
obtained by creating
$n_x := \mathbf {n}(x)$
copies of each point x (also termed blowups of x). Define the distance between copies of distinct points
$x \neq y$
in X to still be
$d(x,y)$
, and between distinct copies of the same point to be
$2 d(x,X \setminus \{x\}) = 2 \inf _{y \in X \setminus \{ x \}} d(x,y)$
.
Also define the distance matrix
$D_X$
and the modified distance matrix
$\mathcal {D}_X$
of X via:
$$\begin{align*} (D_X)_{x,y} := d(x,y), \qquad (\mathcal{D}_X)_{x,y} := \begin{cases} d(x,y), & \text{if } x \neq y,\\ 2\, d(x, X \setminus \{ x \}), & \text{if } x = y, \end{cases} \qquad x, y \in X. \end{align*}$$
Notice, for completeness, that the above construction applied to a non-discrete metric space does not yield a metric; and that blowups of X are “compatible” with isometries of X (see (1.3)). We also remark that this notion of blowup seems to be relatively less studied in the literature, and differs from several other variants – for metric spaces, e.g., [Reference Cheeger, Kleiner and Schioppa14], or for graphs, e.g., [Reference Liu29]. However, the variant considered in this paper was previously studied for the special case of unweighted graphs; see, e.g., [Reference Hatami, Hirst and Norine24–Reference Komlós, Sárközy and Szemerédi26] in extremal and probabilistic graph theory.
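To make Definition 1.1 concrete, here is a minimal Python sketch (our illustration only; the helper names are ours) that produces the distance matrix of a blowup $X[\mathbf{n}]$, together with the modified distance matrix $\mathcal{D}_X$, from a given distance matrix $D_X$.

```python
import numpy as np

def modified_distance_matrix(D):
    """Replace the zero diagonal of D_X by the entries 2 * d(x, X minus {x})."""
    D = np.asarray(D, dtype=float)
    nearest = (D + np.diag([np.inf] * len(D))).min(axis=1)  # d(x, X minus {x}) for each x
    return D + np.diag(2 * nearest)

def blowup_distance_matrix(D, n):
    """Distance matrix of the blowup X[n]: copies of distinct points keep their distance;
    distinct copies of the same point x sit at distance 2 * d(x, X minus {x})."""
    Dmod = modified_distance_matrix(D)
    idx = [i for i, ni in enumerate(n) for _ in range(ni)]  # which point each copy comes from
    B = Dmod[np.ix_(idx, idx)]
    np.fill_diagonal(B, 0.0)  # each copy is at distance 0 from itself
    return B

# Example: the path P3 with the shortest-path metric, blown up by n = (2, 1, 3).
D_P3 = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
print(blowup_distance_matrix(D_P3, [2, 1, 3]))
```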
1.2 Defining the blowup-polynomial; Euclidean embeddings
We now describe some of the results in this work, beginning with metric embeddings. Recall that the complete information about a (finite) metric space is encoded into its distance matrix
$D_X$
(or equivalently, in the off-diagonal part of
$\mathcal {D}_X$
). Metric spaces are useful in many sub-disciplines of the mathematical sciences, and have been studied for over a century. For instance, a well-studied question in metric geometry involves understanding metric embeddings. In 1910, Fréchet showed [Reference Fréchet19] that every finite metric space with
$k+1$
points isometrically embeds into
$\mathbb R^k$
with the supnorm. Similarly, a celebrated 1935 theorem of Schoenberg [Reference Schoenberg37] (following Menger’s works [Reference Menger32, Reference Menger33]) says the following.
Theorem 1.2 (Schoenberg [Reference Schoenberg37])
A finite metric space
$X = \{ x_0, \dots , x_k \}$
isometrically embeds inside Euclidean space
$(\mathbb R^r, \| \cdot \|_2)$
if and only if its modified Cayley–Menger matrix

is positive semidefinite, with rank at most r.
As an aside, the determinant of this matrix is related to the volume of a polytope with vertices
$x_i$
(beginning with classical work of Cayley [Reference Cayley13]), and the Cayley–Menger matrix itself connects to the principle of trilateration/triangulation that underlies the GPS system.
Returning to the present work, our goal is to study the distance matrix of a finite metric space vis-a-vis its blowups. We begin with a “negative” result from metric geometry. Note that every blowup of a finite metric space embeds into
$\mathbb R^k$
(for some k) equipped with the supnorm, by Fréchet’s aforementioned result. In contrast, we employ Schoenberg’s Theorem 1.2 to show that the same is far from true when considering the Euclidean metric. Namely, given a finite metric space X, we characterize all blowups
$X[\mathbf {n}]$
that embed in some Euclidean space
$(\mathbb R^k, \| \cdot \|_2)$
. Since X embeds into
$X[\mathbf {n}]$
, a necessary condition is that X itself should be Euclidean. With this in mind, we have the following.
Theorem A Suppose
$X = \{ x_1, \dots , x_k \}$
is a finite metric subspace of Euclidean space
$(\mathbb R^r, \| \cdot \|_2)$
. Given positive integers
$\{ n_{x_i} : 1 \leqslant i \leqslant k \}$
, not all of which equal
$1$
, the following are equivalent:
-
(1) The blowup
$X[\mathbf {n}]$ isometrically embeds into some Euclidean space
$(\mathbb R^{r'}, \| \cdot \|_2)$ .
-
(2) Either
$k=1$ and
$\mathbf {n}$ is arbitrary (then, by convention,
$X[\mathbf {n}]$ is a simplex); or
$k>1$ and there exists a unique
$1 \leqslant j \leqslant k$ such that
$n_{x_j} = 2$ . In this case, we moreover have: (a)
$n_{x_i} = 1\ \forall i \neq j$ , (b)
$x_j$ is not in the affine hull/span V of
$\{ x_i : i \neq j \}$ , and (c) the unique point
$v \in V$ closest to
$x_j$ , lies in X.
If these conditions hold, one can take
$r' = r$
and
$X[\mathbf {n}] = X \sqcup \{ 2v - x_j \}$
.
Given the preceding result, we turn away from metric geometry, and instead focus on studying the family of blowups
$X[\mathbf {n}]$
– through their distance matrices
$D_{X[\mathbf {n}]}$
(which contain all of the information on
$X[\mathbf {n}]$
). Drawing inspiration from Graham and Pollak [Reference Graham and Pollak21], we focus on one of the simplest invariants of this family of matrices: their determinants, and the (possibly algebraic) nature of the dependence of
$\det D_{X[\mathbf {n}]}$
on
$\mathbf {n}$
. In this paper, we show that the function
$: \mathbf {n} \mapsto \det D_{X[\mathbf {n}]}$
possesses several attractive properties. First,
$\det D_{X[\mathbf {n}]}$
is a polynomial function in the sizes
$n_x$
of the blowup, up to an exponential factor.
Theorem B Given
$(X,d)$
a finite metric space, and a tuple of positive integers
$\mathbf {n} := (n_x)_{x \in X} \in \mathbb {Z}_{>0}^X$
, the function
$\mathbf {n} \mapsto \det D_{X[\mathbf {n}]}$
is a multi-affine polynomial
$p_X(\mathbf {n})$
in the
$n_x$
(i.e., its monomials are squarefree in the
$n_x$
), times the exponential function
$$\begin{align*} \prod_{x \in X} \big( -2\, d(x, X \setminus \{ x \}) \big)^{n_x - 1}. \end{align*}$$
Moreover, the polynomial
$p_X(\mathbf {n})$
has constant term
$p_X(\mathbf {0}) = \prod _{x \in X} (-2 \; d(x, X \setminus \{ x \}))$
, and linear term
$-p_X(\mathbf {0}) \sum _{x \in X} n_x$
.
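For a concrete instance of Theorem B, the following sympy sketch (ours; it uses the determinantal form $p_X(\mathbf{n}) = \det(\Delta_{\mathbf{a}_X} + \Delta_{\mathbf{n}} \mathcal{D}_X)$ written out in (2.2) below) computes the blowup-polynomial of the path $P_3$ and reads off the constant term predicted by the theorem.

```python
import sympy as sp

def blowup_polynomial(D):
    """p_X(n) = det(Delta_a + Delta_n * calD_X), a sympy polynomial in n_1, ..., n_k;
    here a_i = -2 * d(x_i, X minus {x_i}) and calD_X is the modified distance matrix."""
    k = len(D)
    nearest = [min(D[i][j] for j in range(k) if j != i) for i in range(k)]
    calD = sp.Matrix(D) + sp.diag(*[2 * d for d in nearest])
    Delta_a = sp.diag(*[-2 * d for d in nearest])
    n = sp.symbols(f"n1:{k + 1}")
    return sp.expand((Delta_a + sp.diag(*n) * calD).det()), n

# Example: the path P3.  Theorem B predicts the constant term prod_x (-2 d(x, .)) = -8
# and the linear term +8 * (n1 + n2 + n3).
D = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
p, n = blowup_polynomial(D)
print(p)                          # multi-affine in n1, n2, n3
print(p.subs({v: 0 for v in n}))  # constant term: -8
```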
Theorem B follows from a stronger one proved below. See Theorem 2.3, which shows, in particular, that not only do the conclusions of Theorem B hold over an arbitrary commutative ring, but moreover, the blowup-polynomial
$p_X(\mathbf {n})$
is a polynomial function in the variables
$\mathbf {n} = \{ n_x : x \in X \}$
as well as the entries of the “original” distance matrix
$D_X$
– and it is squarefree/multi-affine in all of these arguments (where we treat all entries of $D_X$ as “independent” variables).
We also refine the final assertions of Theorem B, by isolating in Proposition 2.6, the coefficient of every monomial in
$p_X(\mathbf {n})$
. That proposition moreover provides a sufficient condition under which the coefficients of two monomials in
$p_X(\mathbf {n})$
are equal.
Theorem B leads us to introduce the following notion, for an arbitrary finite metric space (e.g., every finite, connected,
$\mathbb {R}_{>0}$
-weighted graph).
Definition 1.3 Define the (multivariate) blowup-polynomial of a finite metric space
$(X,d)$
to be
$p_X(\mathbf {n})$
, where the
$n_x$
are thought of as indeterminates. We write out a closed-form expression in the proof of Theorem B – see equation (2.2).
In this paper, we also study a specialization of this polynomial. Define the univariate blowup-polynomial of
$(X,d)$
to be
$u_X(n) := p_X(n,n,\dots ,n)$
, where n is thought of as an indeterminate.
Remark 1.4 Definition 1.3 requires a small clarification. The polynomial map (by Theorem B)

can be extended from the Zariski dense subset
$\mathbb {Z}_{>0}^k$
to all of
$\mathbb R^k$
. (Zariski density is explained during the proof of Theorem B.) Since
$\mathbb R$
is an infinite field, this polynomial map on
$\mathbb R^k$
may now be identified with a polynomial, which is precisely
$p_X(-)$
, a polynomial in
$|X|$
variables (which we will denote by
$\{ n_x : x \in X \}$
throughout the paper, via a mild abuse of notation). Now setting all arguments to be the same indeterminate yields the univariate blowup-polynomial of
$(X,d)$
.
1.3 Real-stability
We next discuss the blowup-polynomial
$p_X(\cdot )$
and its univariate specialization
$u_X(\cdot )$
from the viewpoint of root-location properties. As we will see, the polynomial
$u_X(n) = p_X(n,n,\dots ,n)$
always turns out to be real-rooted in n. In fact, even more is true. Recall that in recent times, the notion of real-rootedness has been studied in a much more powerful avatar: real-stability. Our next result strengthens the real-rootedness of
$u_X(\cdot )$
to the second attractive property of
$p_X(\cdot )$
– namely, real-stability.
Theorem C The blowup-polynomial
$p_X(\mathbf {n})$
of every finite metric space
$(X,d)$
is real-stable in
$\{ n_x \}$
. (Hence, its univariate specialization
$u_X(n) = p_X(n,n,\dots ,n)$
is always real-rooted.)
Recall that real-stable polynomials are simply ones with real coefficients, which do not vanish when all arguments are constrained to lie in the (open) upper half-plane
$\Im (z)> 0$
. Such polynomials have been intensively studied in recent years, with a vast number of applications. For instance, they were famously used in celebrated works of Borcea–Brändén (e.g., [Reference Borcea and Brändén5–Reference Borcea and Brändén7]) and Marcus–Spielman–Srivastava [Reference Marcus, Spielman and Srivastava30, Reference Marcus, Spielman and Srivastava31] to prove longstanding conjectures (including of Kadison–Singer, Johnson, Bilu–Linial, Lubotzky, and others), construct expander graphs, and vastly extend the Laguerre–Pólya–Schur program [Reference Laguerre27, Reference Pólya34, Reference Pólya and Schur35] from the turn of the 20th century (among other applications).
Theorem C reveals that for all finite metric spaces – in particular, for all finite connected graphs – the blowup-polynomial is indeed multi-affine and real-stable. The class of multi-affine real-stable polynomials has been characterized in [Reference Brändén11, Theorem 5.6] and [Reference Wagner and Wei43, Theorem 3]. (For a connection to matroids, see [Reference Brändén11, Reference Choe, Oxley, Sokal and Wagner15].) To the best of our knowledge, blowup-polynomials
$p_X(\mathbf {n})$
provide novel examples/realizations of multi-affine real-stable polynomials.
1.4 Graph metric spaces: symmetries, complete multipartite graphs
We now turn from the metric-geometric Theorem A, the algebraic Theorem B, and the analysis-themed Theorem C, to a more combinatorial theme, by restricting from metric spaces to graphs. Here, we present two “main theorems” and one proposition.
1.4.1 Graph invariants and symmetries
Having shown that
$\det D_{X[\mathbf {n}]}$
is a polynomial in
$\mathbf {n}$
(times an exponential factor), and that
$p_X(\cdot )$
is always real-stable, our next result explains a third attractive property of
$p_X(\cdot )$
: The blowup-polynomial of a graph
$X = G$
is indeed a (novel) graph invariant. To formally state this result, we begin by re-examining the blowup-construction for graphs and their distance matrices.
A distinguished sub-class of discrete metric spaces is that of finite simple connected unweighted graphs G (so, without parallel/multiple edges or self-loops). Here, the distance between two nodes
$v,w$
is defined to be the (edge-)length of any shortest path joining
$v,w$
. In this paper, we term such objects graph metric spaces. Note that the blowup
$G[\mathbf {n}]$
is a priori only defined as a metric space; we now adjust the definition to make it a graph.
Definition 1.5 Given a graph metric space
$G = (V,E)$
, and a tuple
$\mathbf {n} = (n_v : v \in V)$
, the
$\mathbf {n}$
-blowup of G is defined to be the graph
$G[\mathbf {n}]$
– with
$n_v$
copies of each vertex v – such that a copy of v and one of w are adjacent in
$G[\mathbf {n}]$
if and only if
$v \neq w$
are adjacent in G.
(For example, the
$\mathbf {n}$
-blowup of a complete graph is a complete multipartite graph.) Now note that if G is a graph metric space, then so is
$G[\mathbf {n}]$
for all tuples
$\mathbf {n} \in \mathbb {Z}_{>0}^{|V|}$
. The results stated above thus apply to every such graph G – more precisely, to the distance matrices of the blowups of G.
To motivate our next result, now specifically for graph metric spaces, we first relate the symmetries of the graph with those of its blowup-polynomial
$p_G(\mathbf {n})$
. Suppose a graph metric space
$G = (V,E)$
has a structural (i.e., adjacency-preserving) symmetry
$\Psi : V \to V$
– i.e., an (auto-)isometry as a metric space. Denoting the corresponding relabeled graph metric space by
$\Psi (G)$
,

It is thus natural to ask if the converse holds – i.e., if
$p_G(\cdot )$
helps recover the group of auto-isometries of G. A stronger result would be if
$p_G$
recovers G itself (up to isometry). We show that both of these hold.
Theorem D Given a graph metric space
$G = (V,E)$
and a bijection
$\Psi : V \to V$
, the symmetries of the polynomial
$p_G$
equal the isometries of G. In particular, any (equivalently all) of the statements in (1.3) hold, if and only if
$\Psi $
is an isometry of G. More strongly, the polynomial
$p_G(\mathbf {n})$
recovers the graph metric space G (up to isometry). However, this does not hold for the polynomial
$u_G$
.
As the proof reveals, one in fact needs only the homogeneous quadratic part of
$p_G$
, i.e., its Hessian matrix
$((\partial _{n_v} \partial _{n_{v'}} p_G)(\mathbf {0}_V))_{v,v' \in V}$
, to recover the graph and its isometries. Moreover, this associates to every graph a partially symmetric polynomial, whose symmetries are precisely the graph-isometries.
Our next result works more generally in metric spaces X, hence is stated over them. Note that the polynomial
$p_X(\mathbf {n})$
is “partially symmetric,” depending on the symmetries (or isometries) of the distance matrix (or metric space). Indeed, partial symmetry is as much as one can hope for, because it turns out that “full” symmetry (in all variables
$n_x$
) occurs precisely in one situation.
Proposition 1.6 Given a finite metric space X, the following are equivalent:
-
(1) The polynomial
$p_X(\mathbf {n})$ is symmetric in the variables
$\{ n_x, \ x \in X \}$ .
-
(2) The metric
$d_X$ is a rescaled discrete metric:
$d_X(x,y) = c \mathbf {1}_{x \neq y}\ \forall x,y \in X$ , for some
$c>0$ .
1.4.2 Complete multipartite graphs: novel characterization via stability
The remainder of this section returns to graphs. We next present an interesting byproduct of the above results: a novel characterization of the class of complete multipartite graphs. Begin by observing from the proof of Theorem C that the polynomials
$p_G(\cdot )$
are stable because of a determinantal representation (followed by inversion). However, they do not enjoy two related properties:
-
(1)
$p_G(\cdot )$ is not homogeneous.
-
(2) The coefficients of the multi-affine polynomial
$p_G(\cdot )$ are not all of the same sign; in particular, they cannot form a probability distribution on the subsets of
$\{ 1, \dots , k \}$ (corresponding to the various monomials in
$p_G(\cdot )$ ). In fact, even the constant and linear terms have opposite signs, by the final assertion in Theorem B.
These two (unavailable) properties of real-stable polynomials are indeed important and well-studied in the literature. Corresponding to the preceding numbering:
-
(1) Very recently, Brändén and Huh [Reference Brändén and Huh12] introduced and studied a distinguished class of homogeneous real polynomials, which they termed Lorentzian polynomials (defined below). Relatedly, Gurvits [Reference Gurvits, Kotsireas and Zima23] / Anari–Oveis Gharan–Vinzant [Reference Anari, Oveis Gharan and Vinzant2] defined strongly/completely log-concave polynomials, also defined below. These classes of polynomials have several interesting properties as well as applications (see, e.g., [Reference Anari, Oveis Gharan and Vinzant1, Reference Anari, Oveis Gharan and Vinzant2, Reference Brändén and Huh12, Reference Gurvits, Kotsireas and Zima23] and related/follow-up works).
-
(2) Recall that strongly Rayleigh measures are probability measures on the power set of
$\{ 1, \dots , k \}$ whose generating (multi-affine) polynomials are real-stable. These were introduced and studied by Borcea, Brändén, and Liggett in the fundamental work [Reference Borcea, Brändén and Liggett8]. This work developed the theory of negative association/dependence for such measures, and enabled the authors to prove several conjectures of Liggett, Pemantle, and Wagner, among other achievements.
Given that
$p_G(\cdot )$
is always real-stable, a natural question is if one can characterize those graphs for which a certain homogenization of
$p_G(\cdot )$
is Lorentzian, or a suitable normalization is strongly Rayleigh. The standard mathematical way to address obstacle (1) above is to “projectivize” using a new variable
$z_0$
, while for obstacle (2) we evaluate at
$(-z_1, \dots , -z_k)$
, where we use
$z_j$
instead of
$n_{x_j}$
to denote complex variables. Thus, our next result proceeds via homogenization at
$-z_0$
.
Theorem E Say
$G = (V,E)$
with
$|V|=k$
. Define the homogenized blowup-polynomial

Then the following are equivalent:
-
(1) The polynomial
$\widetilde {p}_G(z_0, z_1, \dots , z_k)$ is real-stable.
-
(2) The polynomial
$\widetilde {p}_G(\cdot )$ has all coefficients (of the monomials
$z_0^{k - |J|} \prod _{j \in J} z_j$ ) nonnegative.
-
(3) We have
$(-1)^k p_G(-1,\dots ,-1)> 0$ , and the normalized “reflected” polynomial
$$\begin{align*}(z_1, \dots, z_k) \quad \mapsto \quad \frac{p_G(-z_1, \dots, -z_k)}{p_G(-1,\dots,-1)} \end{align*}$$
is strongly Rayleigh; in particular, it has nonnegative coefficients (of the monomials $\prod _{j \in J} z_j$), which sum up to $1$.
-
(4) The modified distance matrix
$\mathcal {D}_G$ (see Definition 1.1) is positive semidefinite.
-
(5) G is a complete multipartite graph.
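As a quick numerical sanity check of the equivalence (4)$\Leftrightarrow$(5) – a sketch of ours, not part of the proof – one can test positive semidefiniteness of $\mathcal{D}_G = D_G + 2\operatorname{Id}$ for a complete multipartite graph versus a non-multipartite one such as the path $P_4$:

```python
import numpy as np

def graph_distances(adj):
    """All-pairs shortest-path distances (Floyd-Warshall) of a connected graph."""
    k = len(adj)
    D = np.where(np.asarray(adj) > 0, 1.0, np.inf)
    np.fill_diagonal(D, 0.0)
    for m in range(k):
        D = np.minimum(D, D[:, [m]] + D[[m], :])
    return D

def complete_multipartite(parts):
    """Adjacency matrix of the complete multipartite graph with the given part sizes."""
    labels = [i for i, s in enumerate(parts) for _ in range(s)]
    return np.array([[int(a != b) for b in labels] for a in labels])

# K_{2,3} (complete multipartite) versus the path P4.
tests = {"K_{2,3}": complete_multipartite([2, 3]),
         "P4": np.diag([1, 1, 1], 1) + np.diag([1, 1, 1], -1)}
for name, adj in tests.items():
    calD = graph_distances(adj) + 2 * np.eye(len(adj))   # modified distance matrix
    print(name, "has PSD modified distance matrix:", np.linalg.eigvalsh(calD).min() > -1e-9)
# Theorem E, (4) <=> (5): True for K_{2,3}, False for P4.
```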
Theorem E provides a characterization of complete multipartite graphs, in terms of real stability and the strong(ly) Rayleigh property, which appears to be novel in the literature. Moreover, given the remarks preceding Theorem E, we present three further equivalences to these characterizations.
Corollary 1.7 (Definitions in Section 4.2) The assertions in Theorem E are further equivalent to:
-
(6) The polynomial
$\widetilde {p}_G(z_0, \dots , z_k)$ is Lorentzian.
-
(7) The polynomial
$\widetilde {p}_G(z_0, \dots , z_k)$ is strongly log-concave.
-
(8) The polynomial
$\widetilde {p}_G(z_0, \dots , z_k)$ is completely log-concave.
We quickly explain the corollary. Theorem E(1) implies
$\widetilde {p}_G$
is Lorentzian (see [Reference Brändén and Huh12, Reference Choe, Oxley, Sokal and Wagner15]), which implies Theorem E(2). The other equivalences follow from [Reference Brändén and Huh12, Theorem 2.30], which shows that – for any real homogeneous polynomial – assertions (7), (8) here are equivalent to
$\widetilde {p}_G$
being Lorentzian.
Remark 1.8 As we see in the proof of Theorem E, when
$\mathcal {D}_G$
is positive semidefinite, the homogeneous polynomial
$\widetilde {p}_G(z_0, \dots , z_k)$
has a determinantal representation, i.e.,

with all
$A_j$
positive semidefinite and
$c \in \mathbb R$
. In Proposition A.2, we further compute the mixed characteristic polynomial of these matrices
$A_j$
(see (A.1) for the definition), and show that up to a scalar, it equals the “inversion” of the univariate blowup-polynomial, i.e.,
$z_0^k u_G(z_0^{-1})$
.
Remark 1.9 We also show that the univariate polynomial
$u_G(x)$
is intimately related to the characteristic polynomial of
$D_G$
(i.e., the “distance spectrum” of G), whose study was one of our original motivations. See Proposition 4.2 and the subsequent discussion, for precise statements.
1.5 Two novel delta-matroids
We conclude with a related byproduct: two novel constructions of delta-matroids, one for every finite metric space and the other for each tree graph. Recall that a delta-matroid consists of a finite “ground set” E and a nonempty collection of feasible subsets
$\mathcal {F} \subseteq 2^E$
, satisfying
$\bigcup _{F \in \mathcal {F}} F = E$
as well as the symmetric exchange axiom: Given
$A,B \in \mathcal {F}$
and
$x \in A \Delta B$
(their symmetric difference), there exists
$y \in A \Delta B$
such that
$A \Delta \{ x, y \} \in \mathcal {F}$
. Delta-matroids were introduced by Bouchet in [Reference Bouchet9] as a generalization of the notion of matroids.
Each (skew-)symmetric matrix
$A_{k \times k}$
over a field yields a linear delta-matroid
$\mathcal {M}_A$
as follows. Given any matrix
$A_{k \times k}$
, let
$E := \{ 1, \dots , k \}$
and let a subset
$F \subseteq E$
belong to
$\mathcal {M}_A$
if either F is empty or the principal submatrix
$A_{F \times F}$
is nonsingular. In [Reference Bouchet10], Bouchet showed that if A is (skew-)symmetric, then the set system
$\mathcal {M}_A$
is indeed a delta-matroid, which is said to be linear.
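To illustrate Bouchet’s construction (a brute-force sketch of ours, intended only for small matrices), the following Python snippet lists the feasible sets of $\mathcal{M}_A$ and verifies the symmetric exchange axiom directly:

```python
import itertools
import numpy as np

def linear_delta_matroid(A, tol=1e-9):
    """Feasible sets of M_A: the empty set, plus every F with det(A[F, F]) nonzero."""
    A = np.asarray(A, dtype=float)
    k = len(A)
    feasible = [frozenset()]
    for r in range(1, k + 1):
        for F in itertools.combinations(range(k), r):
            if abs(np.linalg.det(A[np.ix_(F, F)])) > tol:
                feasible.append(frozenset(F))
    return feasible

def symmetric_exchange_holds(feasible):
    """Brute-force check of the symmetric exchange axiom."""
    fs = set(feasible)
    return all(any(A ^ {x, y} in fs for y in A ^ B)
               for A, B in itertools.product(fs, repeat=2) for x in A ^ B)

# Example: the modified distance matrix of the path P3 (off-diagonal d(x,y), diagonal 2).
calD_P3 = [[2, 1, 2], [1, 2, 1], [2, 1, 2]]
fs = linear_delta_matroid(calD_P3)
print(sorted(tuple(sorted(F)) for F in fs))
print("symmetric exchange holds:", symmetric_exchange_holds(fs))
```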
We now return to the blowup-polynomial. First, recall a 2007 result of Brändén [Reference Brändén11]: given a multi-affine real-stable polynomial, the set of monomials with nonzero coefficients forms a delta-matroid. Thus, from
$p_X(\mathbf {n}),$
we obtain a delta-matroid, which as we will explain is linear.
Corollary 1.10 Given a finite metric space
$(X,d)$
, the set of monomials with nonzero coefficients in
$p_X(\mathbf {n})$
forms the linear delta-matroid
$\mathcal {M}_{\mathcal {D}_X}$
.
Definition 1.11 We term
$\mathcal {M}_{\mathcal {D}_X}$
the blowup delta-matroid of
$(X,d)$
.
The blowup delta-matroid
$\mathcal {M}_{\mathcal {D}_X}$
is – even for X a finite connected unweighted graph – a novel construction that arises out of metric geometry rather than combinatorics, and one that seems to be unexplored in the literature (and unknown to experts). Of course, it is a simple, direct consequence of Brändén’s result in [Reference Brändén11]. However, the next delta-matroid is less direct to show.
Theorem F Suppose
$T = (V,E)$
is a finite connected unweighted tree with
$|V| \geqslant 2$
. Define the set system
$\mathcal {M}'(T)$
to comprise all subsets
$I \subseteq V$
, except for the ones that contain two vertices
$v_1 \neq v_2$
in I such that the Steiner tree
$T(I)$
has
$v_1, v_2$
as leaves with a common neighbor. Then
$\mathcal {M}'(T)$
is a delta-matroid, which does not equal $\mathcal {M}_{D_T}$ for any path graph $T = P_k$ with $k \geqslant 9$.
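The set system $\mathcal{M}'(T)$ is easy to enumerate for small trees. Below is a brute-force Python sketch of ours (not the paper's proof): it computes the Steiner tree $T(I)$ as the union of the paths in T from the points of I to one fixed point of I, and then tests the symmetric exchange axiom directly.

```python
import itertools

def bfs_parents(adj, root):
    """Parent pointers of the tree `adj` (dict: vertex -> list of neighbors), rooted at `root`."""
    parent, queue = {root: None}, [root]
    for u in queue:
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent

def steiner_vertices(adj, I):
    """Vertex set of the Steiner tree T(I): in a tree, it is the union of the
    paths from each point of I to one fixed point of I."""
    I = list(I)
    parent, verts = bfs_parents(adj, I[0]), set(I)
    for v in I:
        while parent[v] is not None:
            v = parent[v]
            verts.add(v)
    return verts

def feasible(adj, I):
    """I is infeasible iff two of its points are leaves of T(I) with a common neighbor."""
    if len(I) < 2:
        return True
    verts = steiner_vertices(adj, I)
    nbr = {v: [w for w in adj[v] if w in verts] for v in verts}
    leaves = [v for v in I if len(nbr[v]) == 1]
    return not any(nbr[u][0] == nbr[v][0] for u, v in itertools.combinations(leaves, 2))

def symmetric_exchange_holds(fs):
    fs = set(fs)
    return all(any(A ^ {x, y} in fs for y in A ^ B)
               for A, B in itertools.product(fs, repeat=2) for x in A ^ B)

# Example: the star K_{1,3} with center 0.  Enumerate M'(T) and test the exchange axiom.
T = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
Mprime = [frozenset(I) for r in range(len(T) + 1)
          for I in itertools.combinations(T, r) if feasible(T, I)]
print(sorted(tuple(sorted(F)) for F in Mprime))
print("symmetric exchange holds:", symmetric_exchange_holds(Mprime))
```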
We further prove that this notion of (in)feasible subsets in
$\mathcal {M}'(T)$
does not generalize to all graphs. Thus,
$\mathcal {M}'(T)$
is a combinatorial (not matrix-theoretic) delta-matroid that is also unstudied in the literature to the best of our knowledge, and which arises from every tree, but interestingly, not from all graphs.
As a closing statement here: in addition to further exploring the real-stable polynomials
$p_G(\mathbf {n})$
, it would be interesting to obtain connections between these delta-matroids
$\mathcal {M}_{\mathcal {D}_G}$
and
$\mathcal {M}'(T)$
, and others known in the literature from combinatorics, polynomial geometry, and algebra.
1.6 Organization of the paper
The remainder of the paper is devoted to proving the above Theorems A through F; this will require developing several preliminaries along the way. The paper is clustered by theme; thus, the next two sections and the final one respectively involve, primarily:
-
• (commutative) algebraic methods – to prove the polynomiality of
$p_X(\cdot )$ (Theorem B), and to characterize those X for which it is a symmetric polynomial (Proposition 1.6);
-
• methods from real-stability and analysis – to show
$p_X(\cdot )$ is real-stable (Theorem C);
-
• metric geometry – to characterize for a given Euclidean finite metric space X, all blowups that remain Euclidean (Theorem A), and to write down a related “tropical” version of Schoenberg’s Euclidean embedding theorem from [Reference Schoenberg37].
In the remaining Section 4, we prove Theorems D–F. In greater detail: we focus on the special case of
$X = G$
a finite simple connected unweighted graph, with the minimum edge-distance metric. After equating the isometries of G with the symmetries of
$p_G(\mathbf {n})$
, and recovering G from
$p_G(\mathbf {n})$
, we prove the aforementioned characterization of complete multipartite graphs G in terms of
$\widetilde {p}_G$
being real-stable, or
$p_G(-\mathbf {n}) / p_G(-1, \dots , -1)$
being strongly Rayleigh. Next, we discuss a family of blowup-polynomials from this viewpoint of “partial” symmetry. We also connect
$u_G(x)$
to the characteristic polynomial of
$D_G$
, hence to the distance spectrum of G. Finally, we introduce the delta-matroid
$\mathcal {M}'(T)$
for every tree, and explore its relation to the blowup delta-matroid
$\mathcal {M}_{\mathcal {D}_T}$
(for T a path), as well as extensions to general graphs. We end with Appendices A and B that contain supplementary details and results.
We conclude this section on a philosophical note. Our approach in this work adheres to the maxim that the multivariate polynomial is a natural, general, and more powerful object than its univariate specialization. This is of course famously manifested in the recent explosion of activity in the geometry of polynomials, via the study of real-stable polynomials by Borcea–Brändén and other researchers; but also shows up in several other settings – we refer the reader to the survey [Reference Sokal and Webb40] by Sokal for additional instances. (E.g., a specific occurrence is in the extreme simplicity of the proof of the multivariate Brown–Colbourn conjecture [Reference Royle and Sokal36, Reference Sokal39], as opposed to the involved proof in the univariate case [Reference Wagner42].)
2 Algebraic results: the blowup-polynomial and its full symmetry
We begin this section by proving Theorem B in “full” algebraic (and greater mathematical) generality, over an arbitrary unital commutative ring R. We require the following notation.
Definition 2.1 Fix positive integers
$k, n_1, \dots , n_k> 0$
, and vectors
$\mathbf {p}_i, \mathbf {q}_i \in R^{n_i}$
for all
$1 \leqslant i \leqslant k$
.
-
(1) For these parameters, define the blowup-monoid to be the collection
$\mathcal {M}_{\mathbf {n}}(R) := R^k \times R^{k \times k}$ . We write a typical element as a pair
$(\mathbf {a}, D)$ , where in coordinates,
$\mathbf {a} = (a_i)^T$ and
$D = (d_{ij})$ .
-
(2) Given
$(\mathbf {a}, D) \in \mathcal {M}_{\mathbf {n}}(R)$ , define
$M(\mathbf {a},D)$ to be the square matrix of dimension
$n_1 + \cdots + n_k$ with
$k^2$ blocks, whose
$(i,j)$ -block for
$1 \leqslant i,j \leqslant k$ is
$\delta _{i,j} a_i \operatorname {\mathrm {Id}}_{n_i} + d_{ij} \mathbf {p}_i \mathbf {q}_j^T$ . Also define
$\Delta _{\mathbf {a}} \in R^{k \times k}$ to be the diagonal matrix with
$(i,i)$ entry
$a_i$ , and
$$\begin{align*}N(\mathbf{a},D) := \Delta_{\mathbf{a}} + \operatorname{\mathrm{diag}}(\mathbf{q}_1^T \mathbf{p}_1, \dots, \mathbf{q}_k^T \mathbf{p}_k) \cdot D \ \in R^{k \times k}. \end{align*}$$
-
(3) Given
$\mathbf {a}, \mathbf {a}' \in R^k$ , define
$\mathbf {a} \circ \mathbf {a}' := (a_1 a^{\prime }_1, \dots , a_k a^{\prime }_k)^T \in R^k$ .
The set
$\mathcal {M}_{\mathbf {n}}(R)$
is of course a group under addition, but we are interested in the following non-standard monoid structure on it.
Lemma 2.2 The set
$\mathcal {M}_{\mathbf {n}}(R)$
is a monoid under the product
$$\begin{align*} (\mathbf{a}, D) \circ (\mathbf{a}', D') := \big( \mathbf{a} \circ \mathbf{a}', \ \Delta_{\mathbf{a}} D' + D \Delta_{\mathbf{a}'} + D \operatorname{\mathrm{diag}}(\mathbf{q}_1^T \mathbf{p}_1, \dots, \mathbf{q}_k^T \mathbf{p}_k) D' \big), \end{align*}$$
and with identity element
$((1,\dots ,1)^T, 0_{k \times k})$
.
With this notation in place, we now present the “general” formulation of Theorem B.
Theorem 2.3 Fix integers
$k, n_1, \dots , n_k$
and vectors
$\mathbf {p}_i, \mathbf {q}_i$
as above. Let
$K := n_1 + \cdots + n_k$
.
-
(1) The following map is a morphism of monoids:
$$\begin{align*}\Psi : (\mathcal{M}_{\mathbf{n}}(R), \circ) \to (R^{K \times K}, \cdot), \qquad (\mathbf{a},D) \mapsto M(\mathbf{a},D). \end{align*}$$
-
(2) The determinant of
$M(\mathbf {a},D)$ equals
$\prod _i a_i^{n_i - 1}$ times a multi-affine polynomial in
$a_i, d_{ij}$ , and the entries
$\mathbf {q}_i^T \mathbf {p}_i$ . More precisely,
(2.1)$$ \begin{align} \det M(\mathbf{a},D) = \det N(\mathbf{a},D) \prod_{i=1}^k a_i^{n_i - 1}. \end{align} $$
-
(3) If all
$a_i \in R^\times $ and
$N(\mathbf {a},D)$ is invertible, then so is
$M(\mathbf {a},D)$ , and
$$\begin{align*}M(\mathbf{a},D)^{-1} = M((a_1^{-1}, \dots, a_k^{-1})^T, -\Delta_{\mathbf{a}}^{-1} D N(\mathbf{a},D)^{-1}). \end{align*}$$
Instead of using
$N(\mathbf {a},D)$
which involves “post-multiplication” by D, one can also use
$N(\mathbf {a},D^T)^T$
in the above results, to obtain similar formulas that we leave to the interested reader.
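The identity (2.1) is easy to test numerically. The sketch below (ours, over $R = \mathbb R$ with random real data) assembles $M(\mathbf{a},D)$ and $N(\mathbf{a},D)$ exactly as in Definition 2.1 and compares the two sides of (2.1):

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 3, [2, 1, 3]                                  # number of blocks and their sizes
a = rng.standard_normal(k)
D = rng.standard_normal((k, k))
p = [rng.standard_normal(ni) for ni in n]
q = [rng.standard_normal(ni) for ni in n]

# M(a, D): the (i, j) block is delta_{ij} * a_i * Id + d_{ij} * p_i q_j^T.
M = np.block([[(a[i] * np.eye(n[i]) if i == j else np.zeros((n[i], n[j])))
               + D[i, j] * np.outer(p[i], q[j]) for j in range(k)] for i in range(k)])

# N(a, D) = Delta_a + diag(q_1^T p_1, ..., q_k^T p_k) * D.
N = np.diag(a) + np.diag([q[i] @ p[i] for i in range(k)]) @ D

lhs = np.linalg.det(M)
rhs = np.linalg.det(N) * np.prod([a[i] ** (n[i] - 1) for i in range(k)])
print(np.isclose(lhs, rhs))   # True: both sides of (2.1) agree
```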
Proof The first assertion is easy, and it implies the third assertion via showing that
$M(\mathbf {a},D)^{-1} M(\mathbf {a},D) = \operatorname {\mathrm {Id}}_K$
. (We show these computations for completeness in the appendix.) Thus, it remains to prove the second assertion. To proceed, we employ Zariski density, as was done in, e.g., our previous work [Reference Choudhury and Khare16]. Namely, we begin by working over the field of rational functions in
$k + k^2 + 2K$
variables
$$\begin{align*} \mathbb{F} := \mathbb{Q} \big( \{ A_i \}_{i=1}^k, \ \{ D_{ij} \}_{i,j=1}^k, \ \{ Q_i^{(l)}, P_i^{(l)} : 1 \leqslant i \leqslant k, \ 1 \leqslant l \leqslant n_i \} \big), \end{align*}$$
where
$A_i, D_{ij}$
(with a slight abuse of notation), and
$Q_i^{(l)}, P_i^{(l)}$
– with
$1 \leqslant i,j \leqslant k$
and
$1 \leqslant l \leqslant n_i$
– serve as proxies for
$a_i, d_{ij}$
, and the coordinates of
$\mathbf {q}_i, \mathbf {p}_i$
, respectively. Over this field, we work with

and the matrix
$\mathbf {D} = (D_{ij})$
; note that
$\mathbf {D}$
has full rank
$r=k$
, since
$\det \mathbf {D}$
is a nonzero polynomial over
$\mathbb {Q}$
, hence is a unit in
$\mathbb {F}$
.
Let
$\mathbf {D} = \sum _{j=1}^r \mathbf {u}_j \mathbf {v}_j^T$
be any rank-one decomposition. For each
$1 \leqslant j \leqslant r$
, write
$\mathbf {u}_j = (u_{j1}, \dots , u_{jk})^T$
, and similarly for
$\mathbf {v}_j$
. Then
$D_{ij} = \sum _{s=1}^r u_{si} v_{sj}$
for all
$i,j$
. Now a Schur complement argument (with respect to the
$(2,2)$
block below) yields:

We next compute the determinant on the right alternately: by using the Schur complement with respect to the
$(1,1)$
block instead. This yields:

where
$M_{r \times r}$
has
$(i,j)$
entry
$\sum _{l=1}^k v_{il} \; (A_l^{-1} \mathbf {Q}_l^T \mathbf {P}_l) \; u_{jl}$
. But
$\det (\operatorname {\mathrm {Id}}_r + M)$
is also the determinant of

by taking the Schur complement with respect to its
$(1,1)$
block. Finally, take the Schur complement with respect to the
$(2,2)$
block of
$M'$
, to obtain

and this is indeed
$\prod _i A_i^{n_i - 1}$
times a multi-affine polynomial in the claimed variables.
The above reasoning proves the assertion (2.1) over the field
$\mathbb{F}$
defined above. We now explain how Zariski density helps prove (2.1) over every unital commutative ring – with the key being that both sides of (2.1) are polynomials in the variables. Begin by observing that (2.1) actually holds over the polynomial (sub)ring
$$\begin{align*} R_0 := \mathbb{Q} \big[ \{ A_i \}_{i=1}^k, \ \{ D_{ij} \}_{i,j=1}^k, \ \{ Q_i^{(l)}, P_i^{(l)} : 1 \leqslant i \leqslant k, \ 1 \leqslant l \leqslant n_i \} \big], \end{align*}$$
but the above proof used the invertibility of the polynomials
$A_1, \dots , A_k, \det (D_{ij})_{i,j=1}^k$
.
Now use that
$\mathbb {Q}$
is an infinite field; thus, the following result applies.
Proposition 2.4 The following are equivalent for a field
$\mathbb {F}$
.
-
(1) The polynomial ring
$\mathbb {F}[x_1, \dots , x_n]$ (for some
$n \geqslant 1$ ) equals the ring of polynomial functions from affine n-space
$\mathbb {A}_{\mathbb {F}}^n \cong \mathbb {F}^n$ to
$\mathbb {F}$ .
-
(2) The preceding statement holds for every
$n \geqslant 1$ .
-
(3)
$\mathbb {F}$ is infinite.
Moreover, the nonzero-locus
$\mathcal {L}$
of any nonzero polynomial in
$\mathbb {F}[x_1, \dots , x_n]$
with
$\mathbb {F}$
an infinite field, is Zariski dense in
$\mathbb {A}_{\mathbb {F}}^n$
. In other words, if a polynomial in n variables equals zero on
$\mathcal {L}$
, then it vanishes on all of
$\mathbb {A}_{\mathbb {F}}^n \cong \mathbb {F}^n$
.
Proof-sketch
Clearly
$(2) \implies (1)$
; and that the contrapositive of
$(1) \implies (3)$
holds follows from the fact that over a finite field
$\mathbb {F}_q$
, the nonzero polynomial
$x_1^q - x_1$
equals the zero function. The proof of
$(3) \implies (2)$
is by induction on
$n \geqslant 1$
, and is left to the reader (or see, e.g., standard textbooks, or even [Reference Choudhury and Khare16]) – as is the proof of the final assertion.
By the equivalence in Proposition 2.4, the above polynomial ring
$R_0$
equals the ring of polynomial functions in the same number of variables, so (2.1) now holds over the ring of polynomial functions in the above
$k + k^2 + 2K$
variables – but only on the nonzero-locus of the polynomial
$(\det \mathbf {D}) \prod _i A_i$
, since we used
$A_i^{-1}$
and the invertibility of
$\mathbf {D}$
in the above proof.
Now for the final touch: as
$(\det \mathbf {D}) \prod _i A_i$
is a nonzero polynomial, its nonzero-locus is Zariski dense in affine space
$\mathbb {A}_{\mathbb {Q}}^{k + k^2 + 2K}$
(by Proposition 2.4). Since the difference of the polynomials in (2.1) (this is where we use that
$\det (\cdot )$
is a polynomial!) vanishes on the above nonzero-locus, it does so for all values of
$A_i$
and the other variables. Therefore, (2.1) holds in the ring
$R^{\prime }_0$
of polynomial functions with coefficients in
$\mathbb {Q}$
, hence upon restricting to the polynomial subring of
$R^{\prime }_0$
with integer (not just rational) coefficients – since the polynomials on both sides of (2.1) have integer coefficients. Finally, the proof is completed by specializing the variables
$A_i$
to specific scalars
$a_i$
in an arbitrary unital commutative ring R, and similarly for the other variables.
Theorem 2.3, when specialized to
$p_i^{(l)} = q_i^{(l)} = 1$
for all
$1 \leqslant i \leqslant k$
and
$1 \leqslant l \leqslant n_i$
, reveals how to convert the sizes
$n_{x_i}$
in the blowup-matrix
$D_{X[\mathbf {n}]}$
into entries of the related matrix
$N(\mathbf {a},D)$
. This helps prove a result in the introduction – that
$\det D_{X[\mathbf {n}]}$
is a polynomial in
$\mathbf {n}$
.
Proof of Theorem B
Everything but the final sentence follows from Theorem 2.3, specialized to
$$\begin{align*} R = \mathbb{R}, \qquad \mathbf{p}_i = \mathbf{q}_i = \mathbf{1}_{n_{x_i}}, \qquad a_i := -2\, d(x_i, X \setminus \{ x_i \}), \qquad D := \mathcal{D}_X, \end{align*}$$
so that $M(\mathbf{a}, D) = D_{X[\mathbf{n}]}$.
(A word of caution:
$d_{ii} \neq d(x_i, x_i)$
, and hence
$\mathcal {D}_X \neq D_X$
: they differ by a diagonal matrix.)
In particular,
$p_X(\mathbf {n})$
is a multi-affine polynomial in
$\mathbf {q}_i^T \mathbf {p}_i = n_i$
. We also write out the blowup-polynomial, useful here and below:
(2.2)$$ \begin{align} p_X(\mathbf{n}) = \det N(\mathbf{a}_X, \mathcal{D}_X) = \det \big( \Delta_{\mathbf{a}_X} + \Delta_{\mathbf{n}}\, \mathcal{D}_X \big), \qquad \text{where } \mathbf{a}_X := \big( -2\, d(x, X \setminus \{ x \}) \big)_{x \in X}, \ \Delta_{\mathbf{n}} := \operatorname{\mathrm{diag}}(n_x)_{x \in X}. \end{align} $$
Now the constant term is obtained by evaluating
$\det N(\mathbf {a}_X, 0_{k \times k})$
, which is easy since
$N(\mathbf {a}_X, 0_{k \times k})$
is diagonal. Similarly, the coefficient of
$n_{x_i}$
is obtained by setting all other
$n_{x_{i'}} = 0$
in
$\det N(\mathbf {a}_X,\mathcal {D}_X)$
. Expand along the ith column to compute this determinant; now adding these determinants over all i yields the claimed formula for the linear term.
As a further refinement of Theorem B, we isolate every term in the multi-affine polynomial
$p_X(\mathbf {n})$
. Two consequences follow: (a) a formula relating the blowup-polynomials for a metric space X and its subspace Y; and (b) a sufficient condition for two monomials in
$p_X(\mathbf {n})$
to have equal coefficients. In order to state and prove these latter two results, we require the following notion.
Definition 2.5 We say that a metric subspace Y of a finite metric space
$(X,d)$
is admissible if for every
$y \in Y$
, there exists
$y' \in Y$
such that
$d(y, X \setminus \{ y \}) = d(y,y')$
.
For example, in every finite simple connected unweighted graph G with the minimum edge-distance as its metric, a subset Y of vertices is admissible if and only if the induced subgraph in G on Y has no isolated vertices.
Proposition 2.6 Notation as above.
-
(1) Given any subset
$I \subseteq \{ 1, \dots , k \}$ , the coefficient in
$p_X(\mathbf {n})$ of
$\prod _{i \in I} n_{x_i}$ is
$$\begin{align*}\det (\mathcal{D}_X)_{I \times I} \prod_{j \not\in I} (-2 d(x_j, X \setminus \{ x_j \})) = \det (\mathcal{D}_X)_{I \times I} \prod_{j \not\in I} (-d_{jj}), \end{align*}$$
with $(\mathcal {D}_X)_{I \times I}$ the principal submatrix of
$\mathcal {D}_X$ formed by the rows and columns indexed by I.
-
(2) Suppose
$I \subseteq \{ 1, \dots , k \}$ , and
$Y = \{ x_i : i \in I \}$ is an admissible subspace of X. Then,
$$\begin{align*}p_Y(\{ n_{x_i} : i \in I \}) = p_X(\mathbf{n})|_{n_{x_j} = 0\; \forall j \not\in I} \cdot \prod_{j \not\in I} (-2 d(x_j, X \setminus \{ x_j \}))^{-1}. \end{align*}$$
In particular, if a monomial $\prod _{i \in I_0} n_{x_i}$ does not occur in
$p_Y(\cdot )$ for some
$I_0 \subseteq I$ , then it does not occur in
$p_X(\cdot )$ either.
-
(3) Suppose two admissible subspaces of X, consisting of points
$(y_1, \dots , y_l)$ and
$(z_1, \dots , z_l)$ , are isometric (here,
$1 \leqslant l \leqslant k$ ). If moreover
(2.3)$$ \begin{align} \prod_{i=1}^l d(y_i, X \setminus \{ y_i \}) = \prod_{i=1}^l d(z_i, X \setminus \{ z_i \}), \end{align} $$
then the coefficients in
$p_X(\mathbf {n})$ of
$\prod _{i=1}^l n_{y_i}$ and
$\prod _{i=1}^l n_{z_i}$ are equal.
The final assertion strengthens the (obvious) observation that if
$\Psi : X \to X$
is an isometry, then
$p_X(\cdot ) \equiv p_{\Psi (X)}(\cdot )$
– in other words, the polynomial
$p_X(\cdot )$
is invariant under the action of the permutation of the variables
$( n_x : x \in X )$
induced by
$\Psi $
. This final assertion applies to blowup-polynomials of unweighted graphs with “locally homeomorphic neighborhoods,” e.g., to interior points and intervals in path graphs (or more generally, banded graphs). See the opening discussion in Section 4.3, as well as Proposition 4.11.
Proof
-
(1) It suffices to compute the coefficient of
$\prod _{i \in I} n_{x_i}$ in
$p_X(\mathbf {n}) = \det N(\mathbf {a}_X,\mathcal {D}_X)$ , where
$a_i = -2 d(x_i, X \setminus \{ x_i \})\ \forall 1 \leqslant i \leqslant k$ , and we set all
$n_{x_j},\ j \not \in I$ to zero. To evaluate this determinant, notice that for
$j \not \in I$ , the jth row contains only one nonzero entry, along the main diagonal. Thus, expand the determinant along the jth row for every
$j \not \in I$ ; this yields
$\prod _{j \not \in I} (-d_{jj})$ times the principal minor
$N(\mathbf {a}_X,\mathcal {D}_X)_{I \times I}$ . Moreover, the coefficient of
$\prod _{i \in I} n_{x_i}$ in the expansion of
$\det N(\mathbf {a}_X,\mathcal {D}_X)_{I \times I}$ is the same as that in expanding
$\det N(\mathbf {0}, \mathcal {D}_X)_{I \times I}$ , and this is precisely
$\det (\mathcal {D}_X)_{I \times I}$ .
-
(2) Let us use
$\mathbf {a}_X, \mathcal {D}_X$ and
$\mathbf {a}_Y, \mathcal {D}_Y$ for the appropriate data generated from X and
$Y,$ respectively. Then the admissibility of Y indicates that
$(\mathbf {a}_X)_I = \mathbf {a}_Y$ and
$(\mathcal {D}_X)_{I \times I} = \mathcal {D}_Y$ . Now a direct computation reveals:
$$\begin{align*}p_X(\mathbf{n})|_{n_{x_j} = 0\; \forall j \not\in I} = \det(\Delta_{\mathbf{a}_Y} + \Delta_{\mathbf{n}_Y} \mathcal{D}_Y) \prod_{j \not\in I} (-d_{jj}). \end{align*}$$
-
(3) Let
$I', I'' \subseteq \{ 1, \dots , k \}$ index the points
$(y_1, \dots , y_l)$ and
$(z_1, \dots , z_l)$ , respectively. Similarly, let
$\mathcal {D}_Y, \mathcal {D}_Z$ denote the respective
$l \times l$ matrices (e.g., with off-diagonal entries
$d(y_i, y_j)$ and
$d(z_i, z_j),$ respectively). The admissibility of the given subspaces implies that
$(\mathcal {D}_X)_{I' \times I'} = \mathcal {D}_Y$ and
$(\mathcal {D}_X)_{I'' \times I''} = \mathcal {D}_Z$ . Now use the isometry between the
$y_i$ and
$z_i$ (up to relabeling) to deduce that
$\det \mathcal {D}_Y = \det \mathcal {D}_Z$ . Via the first part above, it remains to prove that
$$\begin{align*}\prod_{j \not\in I'} (-2 d(x_j, X \setminus \{ x_j \})) = \prod_{j \not\in I''} (-2 d(x_j, X \setminus \{ x_j \})). \end{align*}$$
But this follows upon dividing $2^{-l} \prod _{x \in X} (-2d(x, X \setminus \{ x \}))$ by both sides of the hypothesis (2.3) (once again using admissibility).
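As an illustration of Proposition 2.6(1), here is a small sympy check of ours, for the path $P_3$ and $I = \{1, 2\}$:

```python
import sympy as sp

# Modified distance matrix of the path P3 (off-diagonal d(x,y), diagonal 2 * d(x, rest of X)).
calD = sp.Matrix([[2, 1, 2], [1, 2, 1], [2, 1, 2]])
n1, n2, n3 = sp.symbols("n1 n2 n3")
p = sp.Poly(((sp.diag(-2, -2, -2) + sp.diag(n1, n2, n3) * calD).det()).expand(),
            n1, n2, n3)

# Proposition 2.6(1) with I = {1, 2}: the coefficient of n1*n2 equals
# det((calD)_{I x I}) * (-d_33).
lhs = p.coeff_monomial(n1 * n2)
rhs = calD.extract([0, 1], [0, 1]).det() * (-calD[2, 2])
print(lhs == rhs)   # True (both equal -6)
```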
We provide some applications of Proposition 2.6 in later sections; for now, we apply it to prove that the blowup delta-matroid of X is linear.
Proof of Corollary 1.10
It is immediate from Proposition 2.6(1) that the blowup delta-matroid of X is precisely the linear delta-matroid
$\mathcal {M}_{\mathcal {D}_X}$
(see the paragraph preceding Corollary 1.10).
We conclude this section by showing another result in the introduction, which studies when
$p_X(\mathbf {n})$
is symmetric in the variables
$n_x$
.
Proof of Proposition 1.6
First suppose
$d_X$
is the discrete metric times a constant
$c> 0$
. Then all
$a_i = -2c = d_{ii}$
. Hence,

and this is a rank-one update of the diagonal matrix
$\mathbf {\Delta } := c \operatorname {\mathrm {diag}}(n_{x_1}, \dots , n_{x_k}) -2c \operatorname {\mathrm {Id}}_k$
. Hence,

and this is indeed symmetric in the
$n_{x_i}$
.
Conversely, suppose
$p_X(\mathbf {n})$
is symmetric in
$\mathbf {n}$
. If
$|X| = k \leqslant 2,$
then the result is immediate. Also note that the assertion (2) for
$k \geqslant 3$
follows from that for
$k=3$
– since if the distances between any three distinct points are equal, then
$d(x,y) = d(x,y') = d(x',y')$
for all distinct
$x,y,x',y' \in X$
(verifying the remaining cases is easier). Thus, we suppose henceforth that
$|X| = k = 3$
. For ease of exposition, in this proof, we denote
$d^{\prime }_{ij} := d(x_i, x_j)$
for
$1 \leqslant i,j \leqslant 3$
. Also assume by relabeling the
$x_i$
(if needed) that
$0 < d^{\prime }_{12} \leqslant d^{\prime }_{13} \leqslant d^{\prime }_{23}$
. Then

Since
$p_X(\mathbf {n}) = \det N(\mathbf {a}_X,\mathcal {D}_X)$
is symmetric in the
$n_{x_i}$
, we equate the coefficients of
$n_{x_1} n_{x_2}$
and
$n_{x_2} n_{x_3}$
, to obtain

Simplifying this yields:
$d^{\prime }_{12} d^{\prime }_{13} = (d^{\prime }_{23})^2$
, and since
$d^{\prime }_{23}$
dominates
$d^{\prime }_{12}, d^{\prime }_{13}$
, the three distances
$d^{\prime }_{12}, d^{\prime }_{13}, d^{\prime }_{23}$
are equal. This proves the converse for
$|X| = k = 3$
, hence for all
$k \geqslant 3$
.
3 Real-stability of the blowup-polynomial
The proofs in Section 2 were mostly algebraic in nature: although they applied to metric spaces, all but the final proof involved no inequalities. We now show Theorem C:
$p_X(\cdot )$
is always real-stable.
We begin by mentioning some properties with respect to which blowups behave well. These include iterated blowups, the blowup-polynomial, and the modified distance matrix
$\mathcal {D}_X$
and its positivity. (As Theorem A indicates, the property of being Euclidean is not such a property.) We first introduce another “well-behaved” matrix
$\mathcal {C}_X$
for a finite metric space, parallel to
$\mathcal {D}_X$
and the vector
$\mathbf {a}_X$
, which will be useful here and in later sections.
Definition 3.1 Given a finite metric space
$X = \{ x_1, \dots , x_k \}$
, recall the vector
$\mathbf {a}_X \in \mathbb R^k$
as in (2.2) and define the symmetric matrix
$\mathcal {C}_X \in \mathbb R^{k \times k}$
, via
$$\begin{align*} (\mathcal{C}_X)_{ii} := 1, \qquad (\mathcal{C}_X)_{ij} := \frac{d(x_i, x_j)}{2 \sqrt{ d(x_i, X \setminus \{ x_i \}) \, d(x_j, X \setminus \{ x_j \}) }} \quad (i \neq j). \end{align*}$$
In other words,
$-\mathbf {a}_X$
is the diagonal vector of the modified distance matrix
$\mathcal {D}_X$
, and
$$\begin{align*} \mathcal{C}_X = (-\Delta_{\mathbf{a}_X})^{-1/2} \, \mathcal{D}_X \, (-\Delta_{\mathbf{a}_X})^{-1/2}. \end{align*}$$
Lemma 3.2 Fix a finite metric space
$(X,d)$
and an integer tuple
$\mathbf {n} = (n_x : x \in X) \in \mathbb {Z}_{>0}^X$
.
-
(1) Fix a positive integer
$m_{xi}$ for each
$x \in X$ and
$1 \leqslant i \leqslant n_x$ , and let
$\mathbf {m} := (m_{xi})_{x,i}$ denote the entire collection. Then
$(X[\mathbf {n}])[\mathbf {m}]$ is isometrically isomorphic to
$X[\mathbf {n}']$ , where
$\mathbf {n}' = (\sum _{i=1}^{n_x} m_{xi} : x \in X)$ . Here, the ith copy of x in
$X[\mathbf {n}]$ is copied
$m_{xi}$ times in
$(X[\mathbf {n}])[\mathbf { m}]$ .
-
(2) In particular, the blowup-polynomial of an iterated blowup is simply the original blowup-polynomial in a larger number of variables, up to a constant:
(3.3)$$ \begin{align} p_{X[\mathbf{n}]}(\mathbf{m}) \equiv p_X(\mathbf{n}') \prod_{x \in X} a_x^{n_x - 1}, \end{align} $$
where the coordinates of
$\mathbf {n}' = (\sum _{i=1}^{n_x} m_{xi} : x \in X)$ are sums of variables.
-
(3) Now write
$X = \{ x_1, \dots , x_k \}$ as above. Then the matrices
$\mathcal {D}_{X[\mathbf {n}]}, \mathcal {C}_{X[\mathbf {n}]}$ are both block
$k \times k$ matrices, with
$(i,j)$ block, respectively, equal to
$$\begin{align*}d_{ij} \mathbf{1}_{n_{x_i} \times n_{x_j}} \quad \text{and} \quad c_{ij} \mathbf{1}_{n_{x_i} \times n_{x_j}}, \end{align*}$$
where $\mathcal {D}_X = (d_{ij})_{i,j=1}^k$ and $\mathcal {C}_X = (c_{ij})_{i,j=1}^k$ .
-
(4) The following are equivalent:
-
(a) The matrix
$\mathcal {D}_X$ is positive semidefinite.
-
(b) The matrix
$\mathcal {D}_{X[\mathbf {n}]}$ is positive semidefinite for some (equivalently, every) tuple
$\mathbf {n}$ of positive integers.
-
(c) The matrix
$\mathcal {C}_X$ is positive semidefinite.
-
(d) The matrix
$\mathcal {C}_{X[\mathbf {n}]}$ is positive semidefinite for some (equivalently, every) tuple
$\mathbf {n}$ of positive integers.
-
Proof
-
(1) In studying
$(X[\mathbf {n}])[\mathbf {m}]$ , for ease of exposition, we write
$Y := X[\mathbf {n}], Z := (X[\mathbf {n}])[\mathbf {m}]$ . Also write
$y_{xi}$ for the ith copy of x in Y, and
$z_{xij}$ for the jth copy of
$y_{xi}$ in Z, with
$1 \leqslant i \leqslant n_x$ and
$1 \leqslant j \leqslant m_{xi}$ . We now compute
$d_Z(z_{xij},z_{x'i'j'})$ , considering three cases. First, if
$x \neq x'$ , then this equals
$d_Y(y_{xi},y_{x'i'}) = d_X(x,x')$ . Next, if
$x = x'$ but
$i \neq i'$ , then it equals
$d_Y(y_{xi}, y_{xi'}) = 2 d(x, X \setminus \{ x \})$ . Finally, suppose
$x = x'$ and
$i = i'$ but
$j \neq j'$ . Then
$$\begin{align*}d_Z(z_{xij},z_{x'i'j'}) = 2 d_Y(y_{xi}, Y \setminus \{ y_{xi} \}), \end{align*}$$
and this also equals $2 d_X(x, X \setminus \{ x \})$ . These three cases reveal that
$d_Z(z_{xij}, z_{x'i'j'})$ equals the distance in
$X[\mathbf {n}']$ between the copies of
$x,x' \in X$ , and the proof is complete.
-
(2) We show (3.3) using the previous part and the next part, and via Zariski density arguments as in the proof of Theorem 2.3. Define
$n_j := n_{x_j}$ in this proof for convenience. Thus, we work more generally in the setting where
$X = \{ x_1, \dots , x_k \}$ , but the arrays
$$\begin{align*}\mathbf{a}_X = (a_{x_1}, \dots, a_{x_k})^T, \qquad \mathcal{D}_X = (d_{rs})_{r,s=1}^k, \qquad \mathbf{m} = (m_{j1}, \dots, m_{jn_j})_{j=1}^k \end{align*}$$
consist of variables. Set $K := \sum _{j=1}^k n_j$ , and define
$\mathcal {W}_{K \times k}$ to be the block matrix
$$\begin{align*}\mathcal{W} := \begin{pmatrix} \mathbf{1}_{n_1 \times 1} & 0_{n_1 \times 1} & \ldots & 0_{n_1 \times 1} \\ 0_{n_2 \times 1} & \mathbf{1}_{n_2 \times 1} & \ldots & 0_{n_2 \times 1} \\ \vdots & \vdots & \ddots & \vdots \\ 0_{n_k \times 1} & 0_{n_k \times 1} & \ldots & \mathbf{1}_{n_k \times 1} \end{pmatrix}. \end{align*}$$
$\Delta _{\mathbf {a}_{X[\mathbf {n}]}} = \operatorname {\mathrm {diag}}( a_{x_1} \operatorname {\mathrm {Id}}_{n_1}, \dots , a_{x_k} \operatorname {\mathrm {Id}}_{n_k})$ , and a straightforward computation (using the next part) shows that
$\mathcal {D}_{X[\mathbf {n}]} = \mathcal {W} \mathcal {D}_X \mathcal {W}^T$ .
Notice that if one works over the field
$$\begin{align*}\mathbb{Q}(\{ a_{x_j}, m_{ji} : 1 \leqslant j \leqslant k, \ 1 \leqslant i \leqslant n_j \}, \{ d_{rs} : 1 \leqslant r,s \leqslant k \}), \end{align*}$$
then each $a_{x_j}$ and $m_{ji}$ , as well as $\det \mathcal {D}_X$ , is invertible; equivalently, the following polynomial is nonzero:
(3.4)$$ \begin{align} (\det \mathcal{D}_X) \prod_{j=1}^k a_{x_j} \prod_{j=1}^k \prod_{i=1}^{n_j} m_{ji}. \end{align} $$
Thus, we now compute:
$$\begin{align*}p_{X[\mathbf{n}]}(\mathbf{m}) = \det (\Delta_{a_{X[\mathbf{n}]}} + \Delta_{\mathbf{m}} \mathcal{D}_{X[\mathbf{n}]}) = \det (\Delta_{a_{X[\mathbf{n}]}} + \Delta_{\mathbf{m}} \mathcal{W} \mathcal{D}_X \mathcal{W}^T). \end{align*}$$
$$\begin{align*}\det (\Delta_{\mathbf{m}}) \cdot \det \begin{pmatrix} \Delta_{\mathbf{m}}^{-1} \Delta_{a_{X[\mathbf{n}]}} & -\mathcal{W} \\ \mathcal{W}^T & \mathcal{D}_X^{-1} \end{pmatrix} \det (\mathcal{D}_X). \end{align*}$$
Using an alternate Schur complement, we expand this latter expression as
$$\begin{align*}\det (\Delta_{\mathbf{m}}) \cdot \det (\Delta_{\mathbf{m}}^{-1}) \det (\Delta_{a_{X[\mathbf{n}]}}) \det( \mathcal{D}_X^{-1} + \mathcal{W}^T \Delta_{\mathbf{m}} \Delta_{a_{X[\mathbf{n}]}}^{-1} \mathcal{W}) \cdot \det(\mathcal{D}_X). \end{align*}$$
Now defining
$n^{\prime }_j := \sum _{i=1}^{n_j} m_{ji}$ as in the assertion, we have
$$\begin{align*}\mathcal{W}^T \Delta_{\mathbf{m}} \Delta_{a_{X[\mathbf{n}]}}^{-1} \mathcal{W} = \operatorname{\mathrm{diag}}(a_{x_1}^{-1} n^{\prime}_1, \dots, a_{x_k}^{-1} n^{\prime}_k) = \Delta_{a_X}^{-1} \Delta_{\mathbf{n}'}. \end{align*}$$
$$ \begin{align*} p_{X[\mathbf{n}]}(\mathbf{m}) = &\ \det(\Delta_{a_{X[\mathbf{n}]}}) \det(\mathcal{D}_X^{-1} + \Delta_{a_X}^{-1} \Delta_{\mathbf{n}'}) \det(\mathcal{D}_X)\\ = &\ \prod_{j=1}^k a_{x_j}^{n_j} \cdot \det(\operatorname{\mathrm{Id}}_k + \Delta_{a_X}^{-1} \Delta_{\mathbf{n}'} \mathcal{D}_X)\\ = &\ \prod_{j=1}^k a_{x_j}^{n_j - 1} \cdot \det(\Delta_{a_X} + \Delta_{\mathbf{n}'} \mathcal{D}_X) = p_X(\mathbf{n}') \prod_{j=1}^k a_{x_j}^{n_j - 1}. \end{align*} $$
This proves the result over the function field (over
$\mathbb {Q}$ ) in which the entries
$a_{x_j}, m_{ji}, d_{rs}$ are variables. Now, we repeat the Zariski density arguments as in the proof of Theorem 2.3, working this time with the nonzero polynomial given in (3.4). This shows the result over an arbitrary commutative ring – in particular, over
$\mathbb R$ .
-
(3) The key observation is that the diagonal entries of
$\mathcal {D}_{X[\mathbf {n}]}$ corresponding to the copies of
$x \in X$ , all equal
$2d_X(x, X \setminus \{ x \})$ , which is precisely the corresponding diagonal entry in
$\mathcal {D}_X$ . From this, the assertion for
$\mathcal {D}_{X[\mathbf {n}]}$ is immediate, and that for
$\mathcal {C}_{X[\mathbf {n}]}$ is also straightforward.
-
(4) We first prove the equivalence for the
$\mathcal {D}$ -matrices. The preceding part implies that
$\mathcal {D}_X$ is a principal submatrix of
$\mathcal {D}_{X[\mathbf {n}]}$ , hence is positive semidefinite if
$\mathcal {D}_{X[\mathbf {n}]}$ is. Conversely, given
$v \in \mathbb R^{n_{x_1} + \cdots + n_{x_k}}$ , write
$v^T = (v_1^T, \dots , v_k^T)$ , with all
$v_i \in \mathbb R^{n_{x_i}}$ . Let
$w_i := v_i^T \mathbf {1}_{n_{x_i}}$ , and denote by
$w := (w_1, \dots , w_k)^T$ the “compression” of v. Now compute
$$\begin{align*}v^T \mathcal{D}_{X[\mathbf{n}]} v = \sum_{i,j=1}^k v_i^T d_{ij} \mathbf{1}_{n_{x_i} \times n_{x_j}} v_j = \sum_{i,j=1}^k w_i d_{ij} w_j = w^T \mathcal{D}_X w, \end{align*}$$
and this is nonnegative whenever $\mathcal {D}_X$ is positive semidefinite. Hence so is
$\mathcal {D}_{X[\mathbf {n}]}$ .
This proves the equivalence for the
$\mathcal {D}$ -matrices. Now for any metric space Y (e.g.,
$Y = X$ or
$X[\mathbf {n}]$ ), the matrix
$\mathcal {C}_Y = (-\Delta _{\mathbf {a}_Y})^{-1/2} \mathcal {D}_Y (-\Delta _{\mathbf {a}_Y})^{-1/2}$ is positive semidefinite if and only if
$\mathcal {D}_Y$ is. This concludes the proof.
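For illustration (a sketch of ours), the factorization $\mathcal{D}_{X[\mathbf{n}]} = \mathcal{W}\, \mathcal{D}_X\, \mathcal{W}^T$ used in the proof of part (2), together with the positive semidefiniteness equivalence in part (4), can be checked numerically:

```python
import numpy as np

def modified_distance_matrix(D):
    """D_X with its zero diagonal replaced by 2 * d(x, rest of X)."""
    D = np.asarray(D, dtype=float)
    nearest = (D + np.diag([np.inf] * len(D))).min(axis=1)
    return D + np.diag(2 * nearest)

# The path P3, blown up with n = (2, 1, 3).
D_X = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
n = [2, 1, 3]
calD_X = modified_distance_matrix(D_X)

# W is the K x k block matrix of all-ones columns from the proof of Lemma 3.2(2).
W = np.zeros((sum(n), len(n)))
row = 0
for j, nj in enumerate(n):
    W[row:row + nj, j] = 1.0
    row += nj

calD_blowup = W @ calD_X @ W.T            # equals calD_{X[n]}, by Lemma 3.2(2)-(3)
print(calD_blowup)
# Lemma 3.2(4): calD_{X[n]} is PSD iff calD_X is.  (Both hold here, up to rounding,
# since P3 = K_{1,2} is complete multipartite.)
print(np.linalg.eigvalsh(calD_X).min(), np.linalg.eigvalsh(calD_blowup).min())
```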
Remark 3.3 The proof of Lemma 3.2(2) using Zariski density indicates a similar, alternate approach to proving the formula for
$\det M(\mathbf {A}, \mathbf {D})$
in Theorem 2.3. The difference, now, is that the rank-one expansion of the matrix
$\mathbf {D}$
is no longer needed, and can be replaced by the use of the two block-diagonal matrices
$$\begin{align*} \mathcal{W}(\mathbf{p}_1, \dots, \mathbf{p}_k) := \begin{pmatrix} \mathbf{p}_1 & 0 & \ldots & 0 \\ 0 & \mathbf{p}_2 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & \mathbf{p}_k \end{pmatrix} \in R^{K \times k}, \end{align*}$$
and a similar matrix
$\mathcal {W}(\mathbf {q}_1, \dots , \mathbf {q}_k)$
, so that
$M(\mathbf {A}, \mathbf {D}) = \operatorname {\mathrm {diag}}(\{ A_i \cdot \operatorname {\mathrm {Id}}_{n_i} \}) + \mathcal {W}(\{ \mathbf {p}_i \}) \cdot \mathbf {D} \cdot \mathcal {W}(\{ \mathbf {q}_i \})^T$
.
Lemma 3.2(2) immediately implies the following consequence (which can also be shown directly).
Corollary 3.4 Fix a finite metric space
$(X,d)$
. For all integer tuples
$\mathbf {n} \in \mathbb {Z}_{>0}^X$
, the blowup-polynomial of
$X[\mathbf {n}]$
has total degree at most
$|X|$
.
In other words, no monomials of degree
$|X|+1$
or higher occur in
$p_{X[\mathbf {n}]}$
, for any tuple
$\mathbf {n}$
.
We now prove the real-stability of
$p_X(\cdot )$
.
Proof of Theorem C
We continue to use the notation in the proof of Theorem B, with one addition: for expositional clarity, in this proof, we treat
$p_X(\cdot )$
as a polynomial in the complex variables
$z_j := n_{x_j}$
for
$j=1,\dots ,k$
. Thus,
$$\begin{align*} p_X(z_1, \dots, z_k) = \det \big( \Delta_{\mathbf{a}_X} + \Delta_{\mathbf{z}} \, \mathcal{D}_X \big), \end{align*}$$
where
$a_j = -2 d(x_j, X \setminus \{ x_j \}) < 0\ \forall j$
and
$\Delta _{\mathbf {z}} := \operatorname {\mathrm {diag}}(z_1, \dots , z_k)$
. We compute

where
$E_{jj}$
is the elementary
$k \times k$
matrix with
$(j,j)$
entry
$1$
and all other entries zero.
We now appeal to two facts. The first is a well-known result of Borcea–Brändén [Reference Borcea and Brändén5, Proposition 2.4] (see also [Reference Brändén11, Lemma 4.1]), which says that if
$A_1, \dots , A_k, B$
are equi-dimensional real symmetric matrices, with all
$A_j$
positive semidefinite, then the polynomial
$$\begin{align*} f(z_1, \dots, z_k) := \det \Big( B + \sum_{j=1}^k z_j A_j \Big) \end{align*}$$
is either real-stable or identically zero. The second is the folklore result that “inversion preserves stability” (since the upper half-plane is preserved under the transformation
$z \mapsto -1/z$
of
$\mathbb {C}^\times $
). That is, if a polynomial
$g(z_1, \dots , z_k)$
has
$z_j$
-degree
$d_j \geqslant 1$
and is real-stable, then so is the polynomial
$$\begin{align*} z_j^{d_j} \, g(z_1, \dots, z_{j-1}, -1/z_j, z_{j+1}, \dots, z_k) \end{align*}$$
(this actually holds for any
$z_j$
). Apply this latter fact to each variable of the multi-affine polynomial
$f(\cdot )$
in (3.5) – in which
$d_j=1$
,
$B = \mathcal {C}_X$
, and
$A_j = E_{jj}\ \forall j$
. It follows that the polynomial

is real-stable, and the proof is complete.
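Theorem C implies, in particular, that $u_X$ is real-rooted for every finite metric space. A quick numerical sanity check of ours, on a random Euclidean point configuration (the polynomial $u_X$ is recovered by interpolating $n \mapsto \det(\Delta_{\mathbf{a}_X} + n\, \mathcal{D}_X)$):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 6
pts = rng.standard_normal((k, 2))                                  # random planar points
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)     # Euclidean metric on X
nearest = (D + np.diag([np.inf] * k)).min(axis=1)
calD = D + np.diag(2 * nearest)                                    # modified distance matrix
Delta_a = np.diag(-2 * nearest)

# u_X has degree at most k; recover its coefficients by interpolating at k+1 points.
samples = np.arange(k + 1)
values = [np.linalg.det(Delta_a + t * calD) for t in samples]
u = np.polynomial.Polynomial.fit(samples, values, deg=k).convert()

print(np.max(np.abs(u.roots().imag)))   # ~0: u_X is real-rooted, as Theorem C predicts
```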
Remark 3.5 For completeness, we briefly touch upon other notions of stability that are standard in mathematics (analysis, control theory, differential/difference equations): Hurwitz stability and Schur stability. Recall that a real polynomial in one variable is said to be Hurwitz stable (resp. Schur stable) if all of its roots lie in the open left half-plane (resp. in the open unit disk) in
$\mathbb {C}$
. Now the univariate specializations
$u_X(n) = p_X(n,n,\dots ,n)$
are, in general, neither Hurwitz stable nor Schur stable. As a concrete example: in the simplest case of the discrete metric on a space X, equation (2.4) implies that $u_X(n) = (n-2)^{k-1} (n-2 + kn)$, which vanishes at $n = 2$ and $n = \frac{2}{k+1}$; since the root $n = 2$ lies neither in the open left half-plane nor in the open unit disk, this $u_X$ is neither Hurwitz nor Schur stable.
4 Combinatorics: graphs and their partially symmetric blowup-polynomials
We now take a closer look at a distinguished sub-class of finite metric spaces: unweighted graphs. In this section, we will show Theorems D–F. To avoid having to mention the same quantifiers repeatedly, we introduce the following definition (used in the opening section).
Definition 4.1 A graph metric space is a finite, simple, connected, unweighted graph G, in which the distance between two vertices is the number of edges in a shortest path connecting them.
Every graph metric space G is thus a finite metric space, and so the results in the previous sections apply to it. In particular, to every graph metric space
$G = (V,E)$
are naturally associated a (to the best of our knowledge) novel graph invariant
$$\begin{align*} p_G(\mathbf{n}) := \det\bigl( \Delta_{\mathbf{n}} \, \mathcal{D}_G - 2 \operatorname{\mathrm{Id}}_V \bigr), \qquad \Delta_{\mathbf{n}} := \operatorname{\mathrm{diag}}(n_v : v \in V), \end{align*}$$
(which we showed is real-stable), as well as its univariate specialization (which is thus real-rooted)
$$\begin{align*} u_G(n) := p_G(n, n, \dots, n) = \det\bigl( n \, \mathcal{D}_G - 2 \operatorname{\mathrm{Id}}_V \bigr), \end{align*}$$
and its “maximum root”
$\alpha _{\max }(u_G) \in \mathbb R$
. Here,
$D_G$
is the distance matrix of G (with zeros on the diagonal) and
$\mathcal {D}_G = D_G + 2 \operatorname {\mathrm {Id}}_V$
is the modified distance matrix.
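For a first example – granting the determinantal formula recalled just above – take $G = K_2$, so that $\mathcal{D}_{K_2} = \bigl(\begin{smallmatrix} 2 & 1 \\ 1 & 2 \end{smallmatrix}\bigr)$ and
$$\begin{align*} p_{K_2}(n_1, n_2) = \det \begin{pmatrix} 2 n_1 - 2 & n_1 \\ n_2 & 2 n_2 - 2 \end{pmatrix} = 3 n_1 n_2 - 4 n_1 - 4 n_2 + 4, \qquad u_{K_2}(n) = 3 n^2 - 8 n + 4 = (n - 2)(3n - 2), \end{align*}$$
in agreement with the $k = 2$ case of equation (2.4) quoted in Remark 3.5.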
4.1 Connections to the distance spectrum;
$p_G$
recovers G
We begin with an observation (for completeness), which ties into one of our original motivations by connecting the univariate blowup-polynomial $u_G$
to the distance spectrum of G, i.e., to the eigenvalues of the distance matrix
$D_G$
. The study of these eigenvalues began with the work of Graham and Lovász [Reference Graham and Lovász20], and by now, is a well-developed program in the literature (see, e.g., [Reference Aouchiche and Hansen3]). Our observation here is the following.
Proposition 4.2 Suppose
$G = (V,E)$
is a graph metric space. A real number n is a root of the univariate blowup-polynomial
$u_G$
if and only if
$2n^{-1} - 2$
is an eigenvalue of the distance matrix
$D_G$
, with the same multiplicity.
Alternately,
$\lambda \neq -2$
is an eigenvalue of
$D_G$
if and only if
$\frac {2}{2 + \lambda }$
is a root of
$u_G$
.
Proof First, note from the definitions that
$u_G(0) = \det (-2 \operatorname {\mathrm {Id}}_V) \neq 0$
. We now compute
$$\begin{align*} u_G(n) = \det\bigl( n \, \mathcal{D}_G - 2 \operatorname{\mathrm{Id}}_V \bigr) = (2n)^{|V|} \det\Bigl( \operatorname{\mathrm{Id}}_V + \tfrac{1}{2} D_G - n^{-1} \operatorname{\mathrm{Id}}_V \Bigr) \quad \text{for } n \neq 0. \end{align*}$$
Thus, n is a (nonzero) root of
$u_G$
if and only if
$n^{-1}$
is an eigenvalue of
$\operatorname {\mathrm {Id}}_V + \frac {1}{2} D_G$
. The result follows from here.
In the distance spectrum literature, much work has gone into studying the largest eigenvalue of
$D_G$
, called the “distance spectral radius,” as well as the smallest eigenvalue of
$D_G$
. An immediate application of Proposition 4.2 provides an interpretation of another such eigenvalue.
Corollary 4.3 The smallest eigenvalue of
$D_G$
which is strictly greater than
$-2$
is precisely
$\frac {2}{\alpha _{\max }(u_G)} - 2$
.
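For instance, for the complete graph $G = K_k$ (with $k \geqslant 2$), the distance matrix $D_{K_k} = \mathbf{1}_{k \times k} - \operatorname{\mathrm{Id}}_k$ has eigenvalues $k - 1$ and $-1$ (the latter with multiplicity $k - 1$), while $u_{K_k}(n) = (n-2)^{k-1}(n - 2 + kn)$ by equation (2.4), so that $\alpha_{\max}(u_{K_k}) = 2$. Corollary 4.3 then gives $\frac{2}{2} - 2 = -1$, which is indeed the smallest eigenvalue of $D_{K_k}$ exceeding $-2$.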
We refer the reader to further discussions about
$\alpha _{\max }(u_G)$
in and around Proposition A.4.
Following these observations that reinforce our motivating connections between distance spectra and the blowup-polynomial, we now move on to the proof of Theorem D. Recall, this result shows that (the homogeneous quadratic part of)
$p_G$
recovers/detects the graph and its isometries – but
$u_G$
does not do so.
Proof of Theorem D
We prove the various assertions in serial order. One implication for the first assertion was described just above the theorem-statement. Conversely, suppose
$p_G(\mathbf {n}) \equiv p_{\Psi (G)}(\mathbf {n})$
. Fix vertices
$v \neq w \in V$
, and equate the coefficient of
$n_v n_w$
on both sides using Proposition 2.6:

since
$d_G(v, V \setminus \{ v \}) = 1\ \forall v \in V$
. Thus
$d(\Psi (v), \Psi (w)) = d(v,w)$
for all
$v, w \in V$
, so
$\Psi $
is an isometry.
The second assertion is shown as follows. By Proposition 2.6, the vertex set can be obtained from the nonzero monomials
$n_v n_w$
(since every edge yields a nonzero monomial). In particular,
$|V|$
is recovered. Again by Proposition 2.6, there is a bijection between the set of edges
$v \sim w$
in G and the monomials
$n_v n_w$
in
$p_G(\mathbf {n})$
with coefficient
$3(-2)^{|V|-2}$
. Thus, all quadratic monomials in
$p_G(\mathbf {n})$
with this coefficient reveal the edge set of G as well.
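To illustrate with $G = P_3$ (again granting the determinantal formula recalled above): one computes $p_{P_3}(\mathbf{n}) = -8 + 8(n_1 + n_2 + n_3) - 6 n_1 n_2 - 6 n_2 n_3$, so the monomials with coefficient $3 (-2)^{|V|-2} = -6$ are precisely $n_1 n_2$ and $n_2 n_3$, recovering the two edges of the path; the monomial $n_1 n_3$ is absent, since $d(1,3) = 2$.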
Finally, to show that
$u_G$
does not detect the graph G, consider the two graphs
$H,K$
in Figure 1.

Figure 1: Two non-isometric graphs on six vertices with co-spectral blowups.
Both graphs have vertex sets
$\{ 1, \dots , 6 \}$
, and are not isomorphic. Now define (see Remark 4.4):

Then
$H', K'$
are not isometric, but a direct computation reveals:

Remark 4.4 The graphs
$H',K'$
in the preceding proof were not accidental or providential, but stem from the recent paper [Reference Drury and Lin18], which is part of the literature on exploring which graphs are distance co-spectral (see the Introduction). In the discussion preceding [Reference Drury and Lin18, Figure 1], the authors verified that the graphs
$H' \not \cong K'$
used in the preceding proof are indeed distance co-spectral. This result, combined with Proposition 4.2, leads to the above use of
$H', K'$
in proving that
$u_G$
cannot detect G up to isometry.
Remark 4.5 As the proof of Theorem D reveals, for any graph metric space
$G = (V,E)$
, the Hessian of the blowup-polynomial carries the same information as the matrix
$\mathcal {D}_G \in \mathbb {Z}_{>0}^{V \times V}$
:
$$\begin{align*} \mathcal{H}_{p_G}(\mathbf{0}) := \bigl( \partial_{n_v} \partial_{n_w} p_G (\mathbf{0}) \bigr)_{v, w \in V} = (-2)^{|V|-2} \bigl( 4 \cdot \mathbf{1}_{V \times V} - \mathcal{D}_G^{\circ 2} \bigr), \end{align*}$$
where
$\mathcal {D}_G^{\circ 2}$
is the entrywise square of the modified distance matrix
$\mathcal {D}_G$
.
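For example, continuing the $K_2$ computation above: $\partial_{n_1} \partial_{n_2} p_{K_2} \equiv 3 = (-2)^{|V|-2} \bigl( 4 - d(1,2)^2 \bigr)$, so the (constant) mixed partial derivative recovers the off-diagonal entry $d(1,2) = 1$ of $\mathcal{D}_{K_2}$, in line with the remark.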
4.2 Complete multipartite graphs via real-stability
The next result we show is Theorem E (described in the title of this subsection). Before doing so, we define the three classes of polynomials alluded to in Corollary 1.7, as promised there (and for the self-sufficiency of this paper).
-
(1) Brändén–Huh [Reference Brändén and Huh12] defined a polynomial
$p \in \mathbb R[x_1, \dots , x_k]$ to be Lorentzian if
$p(\cdot )$ is homogeneous of some degree d, has nonnegative coefficients, and given any indices
$1 \leqslant j_1, \dots , j_{d-2} \leqslant k$ , if
$$\begin{align*}g(x_1, \dots, x_k) := \left( \partial_{x_{j_1}} \ldots \partial_{x_{j_{d-2}} } p \right)(x_1, \dots, x_k), \end{align*}$$
then $\mathcal {H}_g := (\partial _{x_i} \partial _{x_j} g)_{i,j=1}^k \in \mathbb R^{k \times k}$ is Lorentzian. (This last term means that $\mathcal {H}_g$ is nonsingular and has exactly one positive eigenvalue; a toy example appears after this list.)
-
(2) Suppose
$p \in \mathbb R[x_1, \dots , x_k]$ has nonnegative coefficients. Gurvits [Reference Gurvits, Kotsireas and Zima23] defined p to be strongly log-concave if for all
$\alpha \in \mathbb {Z}_{\geqslant 0}^k$ , either the derivative
$\displaystyle \partial ^\alpha (p) := \prod _{i=1}^k \partial _{x_i}^{\alpha _i} \cdot p$ is identically zero, or
$\log (\partial ^\alpha (p))$ is defined and concave on
$(0,\infty )^k$ .
-
(3) Suppose
$p \in \mathbb R[x_1, \dots , x_k]$ has nonnegative coefficients. Anari, Oveis Gharan, and Vinzant [Reference Anari, Oveis Gharan and Vinzant2] defined p to be completely log-concave if for all integers
$m \geqslant 1$ and matrices
$A = (a_{ij}) \in [0,\infty )^{m \times k}$ , either the derivative
$\displaystyle \partial _A (p) := \prod _{i=1}^m \left ( \sum _{j=1}^k a_{ij} \partial _{x_j} \right ) \cdot p$ is identically zero, or
$\log (\partial _A (p))$ is defined and concave on
$(0,\infty )^k$ .
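As a toy instance of definition (1) (for illustration only): the homogeneous quadratic $g(x_1, x_2, x_3) = x_1 x_2 + x_1 x_3 + x_2 x_3$ has nonnegative coefficients, and its Hessian
$$\begin{align*} \mathcal{H}_g = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix} = \mathbf{1}_{3 \times 3} - \operatorname{\mathrm{Id}}_3 \end{align*}$$
has eigenvalues $2, -1, -1$, hence is nonsingular with exactly one positive eigenvalue; thus $g$ is Lorentzian (here $d = 2$, so no derivatives $\partial_{x_{j_i}}$ need to be taken).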
Having written these definitions, we proceed to the main proof.
Proof of Theorem E
We prove the cyclic chain of implications:
$$\begin{align*} (1) \implies (2) \implies (4) \implies (1), \qquad (4) \iff (5), \qquad (3) \implies (2), \qquad \{ (1), (2) \} \implies (3). \end{align*}$$
We begin with a short proof of
$(1) \implies (2)$
via Lorentzian polynomials from Corollary 1.7. It was shown in [Reference Brändén and Huh12, pp. 828–829] that if
$(1)$
holds then
$\widetilde {p}_G$
is Lorentzian (see also [Reference Choe, Oxley, Sokal and Wagner15, Theorem 6.1]), and in turn, this implies
$(2)$
by definition (or by loc. cit.).
We next show that
$(3) \implies (2)$
. Observe that
$$\begin{align*} \widetilde{p}_G(z_0, z_1, \dots, z_k) = (-z_0)^k \, p_G(-z_1/z_0, \dots, -z_k/z_0), \quad \text{so that} \quad \widetilde{p}_G(1, \mathbf{z}) = (-1)^k \, p_G(-\mathbf{z}). \tag{4.4} \end{align*}$$
Now if
$(3)$
holds, then
$\widetilde {p}_G(1,1,\dots ,1) = (-1)^k p_G(-1, \dots , -1)> 0$
, so the polynomial
$$\begin{align*} \widetilde{p}_G(1, z_1, \dots, z_k) = (-1)^k \, p_G(-1, \dots, -1) \cdot \frac{p_G(-z_1, \dots, -z_k)}{p_G(-1, \dots, -1)} \end{align*}$$
has all coefficients nonnegative, using
$(3)$
and (4.4). Since
$p_G(\cdot )$
is multi-affine (or by inspecting the form of
$\widetilde {p}_G(\cdot )$
), this shows
$(3) \implies (2)$
. Now to show
$\{ (1), (2) \} \implies (3)$
, note that the sum of all coefficients in
$\widetilde {p}_G(\cdot )$
equals
$$\begin{align*} \widetilde{p}_G(1, 1, \dots, 1) = (-1)^k \, p_G(-1, \dots, -1), \end{align*}$$
and by
$(2)$
, this dominates the “constant term” of
$p_G$
, i.e.,
$$\begin{align*} (-1)^k \, p_G(0, \dots, 0) = (-1)^k \det\bigl( -2 \operatorname{\mathrm{Id}}_k \bigr) = 2^k > 0. \end{align*}$$
In particular,
$(-1)^k p_G(-1,\dots ,-1)> 0$
, proving a part of
$(3)$
. Hence using
$(2)$
and (4.4), all coefficients of the “reflected” polynomial are nonnegative; and the normalization shows that the coefficients sum to
$1$
. It remains to show that the “reflected” polynomial
$p_G(-\mathbf {z}) / p_G(-1,\dots ,-1)$
is real-stable. Once again, using (4.4) and that
$(-1)^k p_G(-1,\dots ,-1)> 0$
, it suffices to show that
$\widetilde {p}_G(1,z_1, \dots , z_k)$
is real-stable. But this follows from
$(1)$
by specializing to
$z_0 \mapsto 1 \in \mathbb R$
. This finally shows that
$(1)$
and
$(2)$
together imply
$(3)$
.
We next show the equivalence of
$(4)$
and
$(5)$
. If
$G = K_k$
, then
$\mathcal {D}_G = \operatorname {\mathrm {Id}}_k + \mathbf {1}_{k \times k}$
is positive semidefinite. Hence so is
$\mathcal {D}_{K_k[\mathbf {n}]}$
for all
$\mathbf {n}$
, by Lemma 3.2(4). The converse follows from [Reference Lin, Hong, Wang and Shu28, Theorem 1.1], since
$\mathcal {D}_G = D_G + 2 \operatorname {\mathrm {Id}}_{|V(G)|}$
.
Finally, we will show
$(2) \implies (4) \implies (1)$
. First, assume (2), i.e., that
$\widetilde {p}_G(\cdot )$
has nonnegative coefficients. Fix a subset
$J \subseteq \{ 1, \dots , k \}$
; using Proposition 2.6(1), the coefficient of
$z_0^{k - |J|} \prod _{j \in J} z_j$
equals
$$\begin{align*} 2^{\,k - |J|} \, \det\bigl( (\mathcal{D}_G)_{J \times J} \bigr), \end{align*}$$
where $(\mathcal{D}_G)_{J \times J}$ denotes the corresponding principal submatrix.
By the hypotheses, this expression is nonnegative for every
$J \subseteq \{ 1, \dots , k \}$
. Hence,
$\mathcal {D}_G$
has all principal minors nonnegative (and is symmetric), so is positive semidefinite, proving (4).
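For contrast – and granting the coefficient formula displayed above – consider the path $P_4$, which is not complete multipartite: here $d(1,4) = 3$, so the principal minor of $\mathcal{D}_{P_4}$ on $J = \{1, 4\}$ equals $\det \bigl(\begin{smallmatrix} 2 & 3 \\ 3 & 2 \end{smallmatrix}\bigr) = -5 < 0$, whence the coefficient of $z_0^{2} z_1 z_4$ in $\widetilde{p}_{P_4}$ is negative and $(2)$ fails for $P_4$.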
Finally, if (4) holds, i.e.,
$\mathcal {D}_G$
is positive semidefinite, then so is