Spectral gap in random bipartite biregular graphs and applications

We prove an analogue of Alon's spectral gap conjecture for random bipartite, biregular graphs. We use the Ihara-Bass formula to connect the non-backtracking spectrum to that of the adjacency matrix, employing the moment method to show there exists a spectral gap for the non-backtracking matrix. A byproduct of our main theorem is that random rectangular zero-one matrices with fixed row and column sums are full-rank with high probability. Finally, we illustrate applications to community detection, coding theory, and deterministic matrix completion.


Introduction
Random regular graphs, where each vertex has the same degree d, are among the most well-known examples of expanders: graphs with high connectivity that exhibit rapid mixing. Expanders are of particular interest in computer science, from sampling and complexity theory to the design of error-correcting codes. For an extensive review of their applications, see Hoory, Linial, and Wigderson [2006]. What makes random regular graphs particularly interesting expanders is the fact that they exhibit all three existing types of expansion properties: edge, vertex, and spectral.
The study of regular random graphs took off with the work of Bender [1974], Bender and Canfield [1978], Bollobás [1980], and slightly later McKay [1984] and Wormald [1981]. Most often, their expanding properties are described in terms of the existence of the spectral gap, which we define below.
Let A be the adjacency matrix of a simple graph, where A_ij = 1 if i and j are connected and zero otherwise. Denote by σ(A) = {λ_1 ≥ λ_2 ≥ ...} its spectrum. For a random d-regular graph, λ_1 = max_i |λ_i| = d, but the second largest eigenvalue η = max(|λ_2|, |λ_n|) is asymptotically almost surely of much smaller order, leading to a spectral gap. Note that we will always use η to denote the second largest eigenvalue of the adjacency matrix A. For a list of important symbols see Appendix A.
Spectral expansion properties of a graph are, strictly speaking, defined with respect to the smallest nonzero eigenvalue of the normalized Laplacian, L = I − D −1/2 AD −1/2 , where I is the identity and D is the diagonal matrix of vertex degrees. In the case of a d-regular graph, σ(L) is a scaled and shifted version of σ(A). Thus, a spectral gap for A translates directly into one for L.
The study of the second largest eigenvalue in regular graphs had a first breakthrough in the Alon-Boppana bound Alon [1986], which states that the second largest eigenvalue satisfies η ≥ 2√(d − 1) − o(1) as the number of vertices grows. Graphs for which the Alon-Boppana bound is attained are called Ramanujan. Friedman [2003] proved the conjecture of Alon [1986] that almost all d-regular graphs have η ≤ 2√(d − 1) + ε for any ε > 0 with high probability as the number of vertices goes to infinity. This result was simultaneously simplified and deepened in Friedman and Kohler [2014]. More recently, Bordenave [2015] gave a different proof that η ≤ 2√(d − 1) + ε_n for a sequence ε_n → 0 as n, the number of vertices, tends to infinity; the new proof is based on the non-backtracking operator and the Ihara-Bass identity.
Figure 1. The structure of a bipartite, biregular graph. There are n = |V_1| left vertices and m = |V_2| right vertices, of degree d_1 and d_2 respectively, with the constraint that nd_1 = md_2. The distribution G(n, m, d_1, d_2) is taken uniformly over all such graphs.
1.1. Bipartite biregular model. In this paper we prove the analog of Friedman and Bordenave's result for bipartite, biregular random graphs. These are graphs for which the vertex set partitions into two independent sets V 1 and V 2 , such that all edges occur between the sets. In addition, all vertices in set V i have the same degree d i . See Figure 1 for a schematic of such a graph. Along the way, we also bound the smallest positive eigenvalue and the rank of the adjacency matrix.
Let G(n, m, d_1, d_2) be the uniform distribution over simple, bipartite, biregular random graphs. Any G ∼ G(n, m, d_1, d_2) is sampled uniformly from the set of simple bipartite graphs with vertex set V = V_1 ∪ V_2, with |V_1| = n, |V_2| = m, and where every vertex in V_i has degree d_i. Note that we must have nd_1 = md_2 = |E|. Without any loss of generality, we will assume n ≤ m and thus d_1 ≥ d_2 when necessary. Sometimes we will write that G is a (d_1, d_2)-regular graph, when we want to explicitly state the degrees. Let X be the n × m matrix with entries X_ij = 1 if and only if there is an edge between vertices i ∈ V_1 and j ∈ V_2, so that the adjacency matrix has the block form
(1)  A = ( 0  X ; X*  0 ).
It is well known that G(n, m, d_1, d_2) is connected with high probability, as long as d_i ≥ 3. From (1), it can be verified that all eigenvalues of A occur in pairs λ and −λ, where |λ| is a singular value of X, along with at least |n − m| zero eigenvalues. For these reasons, the second largest eigenvalue is η = λ_2(A) = −λ_{n+m−1}(A). Furthermore, the leading or Perron eigenvalue of A is always √(d_1 d_2), matched on the left by −√(d_1 d_2), which reduces to the result for the d-regular case when d_1 = d_2. We will focus on the spectrum of the adjacency matrix. Similar to the case of the d-regular graph, in the bipartite, biregular graph the spectrum of the normalized Laplacian is a scaled and shifted version of that of the adjacency matrix: because of the structure of the graph, D^{−1/2} A D^{−1/2} = (1/√(d_1 d_2)) A. Therefore, a spectral gap for A again implies that one exists for L.
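These facts are easy to verify on a small example. The sketch below (assuming NumPy is available; it uses the complete bipartite graph K_{2,3} as a toy (3, 2)-biregular graph, a choice made here for illustration) checks the Perron eigenvalue √(d_1 d_2), the ±λ pairing, and the zero eigenvalues:

```python
import numpy as np

# K_{2,3}: n = 2 left vertices of degree d1 = 3, m = 3 right vertices of degree d2 = 2
n, m, d1, d2 = 2, 3, 3, 2
X = np.ones((n, m))                      # biadjacency block of K_{2,3}
A = np.block([[np.zeros((n, n)), X],
              [X.T, np.zeros((m, m))]])  # block form of Eqn. (1)

eig = np.sort(np.linalg.eigvalsh(A))
# Perron eigenvalue is sqrt(d1*d2); eigenvalues come in +/- pairs plus >= |n-m| zeros
assert np.isclose(eig[-1], np.sqrt(d1 * d2))
assert np.allclose(eig, -eig[::-1])      # spectrum is symmetric about 0
assert np.sum(np.isclose(eig, 0)) >= abs(n - m)
```

Here the 5 eigenvalues are ±√6 together with three zeros, consistent with |λ| running over the singular values of X (which has rank one for K_{2,3}).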
Previous work on bipartite, biregular graphs includes that of Feng and Li [1996] and Li and Solé [1996], who proved the analog of the Alon-Boppana bound: for every ε > 0,
(2)  η ≥ √(d_1 − 1) + √(d_2 − 1) − ε
as the number of vertices goes to infinity. This bound also follows immediately from the fact that the second largest eigenvalue cannot be asymptotically smaller than the right edge of the asymptotic support of the eigenvalue distribution, which is √(d_1 − 1) + √(d_2 − 1) and was first computed by Godsil and Mohar [1988]. They found the spectral measure µ(λ) has a point mass at λ = 0 of size (1/2)|d_1 − d_2|/(d_1 + d_2) and a continuous part given by an explicit density. Graphs where η attains the Alon-Boppana bound, Eqn. (2), are also called Ramanujan. Complete graphs are always Ramanujan but not sparse, whereas d-regular or bipartite (d_1, d_2)-regular graphs are sparse. Our results show that almost every (d_1, d_2)-regular graph is "almost" Ramanujan.
Beyond the first two eigenvalues, we should mention that Bordenave and Lelarge [2010] studied the limiting spectral distribution of large sparse graphs. They obtained a set of two coupled equations that can be solved for the eigenvalue distribution of any (d 1 , d 2 )-regular random graph. The solution of the coupled equations for fixed d 1 and d 2 shows convergence of the spectral distribution of a random regular bipartite graph to the Marčenko-Pastur law. This was first observed by Godsil and Mohar [1988]. For d 1 , d 2 → ∞ with d 1 /d 2 converging to a constant, Dumitriu and Johnson [2016] showed that the limiting spectral distribution converges to a transformed version of the Marčenko-Pastur law. When d 1 = d 2 = d, this is equal to the Kesten-McKay distribution (McKay [1981]), which becomes the semicircular law as d → ∞ (Godsil and Mohar [1988], Dumitriu and Johnson [2016]). Notably, Mizuno and Sato [2003] obtained the same results when they calculated the asymptotic distribution of eigenvalues for bipartite, biregular graphs of high girth. However, their results are not applicable to random bipartite biregular graphs as these asymptotically almost surely have low girth (Dumitriu and Johnson [2016]).
Our techniques borrow heavily from the results of Bordenave, Lelarge, and Massoulié [2015] and Bordenave [2015], who simplified the trace method of Friedman [2003] by counting non-backtracking walks built up of segments with at most one cycle, and by relating the eigenvalues of the adjacency matrix to the eigenvalues of the non-backtracking one via the Ihara-Bass identity. The combinatorial methods we use to bound the number of such walks are similar to how Brito, Dumitriu, Ganguly, Hoffman, and Tran [2015] counted self-avoiding walks in the context of community recovery in a regular stochastic block model.
Finally, we should mention that similar techniques have been employed by Coste [2017] to study the spectral gap of the Markov matrix of a random directed multigraph. The non-backtracking operator of a bipartite biregular graph could be seen as the adjacency matrix of a directed multigraph, whose eigenvalues are a simple scaling away from the eigenvalues of the Markov matrix of the same. However, the block structure of our non-backtracking matrix means that the corresponding multigraph is bipartite, and this makes it different from the model used in Coste [2017].
1.2. Configuration versus random lift model. Random lifts are a model that allows the construction of large, random graphs by repeatedly lifting the vertices of a base graph and permuting the endpoints of copied edges. See Bordenave [2015] for a recent overview. A number of spectral gap results have been obtained for random lift models, e.g. Friedman [2003], Angel, Friedman, and Hoory [2007], Friedman and Kohler [2014], and Bordenave [2015].
Random lift models are contiguous with the configuration model in very particular cases. See Section 4.1 for a definition of the configuration model; this is a useful substitute for the uniform model and is practically equivalent. For even d, random n-lifts of a single vertex with d/2 self-loops are equivalent to the d-regular configuration model. For odd d, no equivalent lift construction is known or even believed to exist.
For (d_1, d_2)-biregular, bipartite graphs the situation is more complicated. A celebrated result due to Marcus, Spielman, and Srivastava [2013a] showed the existence of infinite families of (d_1, d_2)-regular bipartite graphs that are Ramanujan, that is, with η ≤ √(d_1 − 1) + √(d_2 − 1), by taking repeated lifts of the complete bipartite graph K_{d_1,d_2} on d_1 left and d_2 right vertices. If d_1 = d_2 = d, then the configuration model is contiguous to the random lift of the multigraph with two vertices and d edges connecting them. Certainly, for a biregular bipartite graph with n/d_2 = m/d_1 = k not an integer, we cannot construct it by lifting K_{d_1,d_2} as considered by Marcus, Spielman, and Srivastava [2013a]. But even for integer k, it seems likely the two models are not contiguous, for the reasons we now explain.
Suppose there were a base graph G that could be lifted to produce any (3, 2)-biregular, bipartite graph. Consider another graph H which is a union of 2 complete bipartite graphs K_{2,3}. Then H is a (3, 2)-biregular, bipartite graph and occurs in the configuration model with nonzero probability. The only G that H could be a lift of is K_{2,3}, because H is a disconnected union and K_{2,3} itself is not a lift of any graph or multigraph (note that 2 + 3 = 5 is prime). Therefore, G would have to be K_{2,3}. Figure 2 shows an example of another graph H′ with the same number of vertices as H which is (3, 2)-biregular, bipartite but is not a lift of K_{2,3}. Now, H and H′ both occur in the configuration model with equal, nonzero probability. Therefore, we cannot construct every example of a (3, 2)-biregular, bipartite graph by repeatedly lifting a single base graph G.
Since the eventual goal of any argument based on lifts that also applies to the configuration model would have to show that almost all bipartite, biregular graphs can be obtained by lifting and are sampled asymptotically uniformly from the lift model, the above considerations suggest this argument would be highly non-trivial. We in fact doubt such an argument can be made. Intuitively, in the configuration model edges occur "nearly independently," whereas for random lifts there are strong dependencies due to the fact that many edges are not allowed; see Bordenave [2015].
1.3. Structure of the paper. Briefly, we now lay out the method of proof that the bipartite, biregular random graph is almost Ramanujan. The proof outline is given in detail in Section 5.1, after some important preliminary terms and definitions given in Section 4. The bulk of our work builds to Theorem 3, which is actually a bound on the second eigenvalue of the non-backtracking matrix B, as explained in Section 2. The Ramanujan bound on the second eigenvalue of A then follows as Theorem 4. As a side result, we find that row- and column-regular, rectangular zero-one matrices (the off-diagonal block X of the adjacency matrix in Eqn. (1)) with aspect ratio smaller than one (d_1 > d_2) have full rank with high probability.
To find the second eigenvalue of B, we subtract from it a matrix S that is formed from the leading eigenvectors, and examine the spectral norm of the "almost-centered" matrix B̄ = B − S. We then proceed to use the trace method to bound the spectral norm of the matrix B̄ by its trace. However, since B̄ is not positive definite, this leads us to consider
‖B̄^ℓ‖^{2k} ≤ tr[ (B̄^ℓ (B̄^ℓ)*)^k ].
On the right hand side, the terms in B̄^ℓ refer to circuits built up of 2k segments, each of length ℓ + 1, since an entry B_{ef} is a walk on two edges. Because the degrees are bounded, it turns out that, for ℓ = O(log n), the depth-ℓ neighborhoods of every vertex contain at most one cycle: they are "tangle-free." Thus, we can bound the trace by computing the expectation of the circuits that contribute, along with an upper bound on their multiplicity, taking each segment to be ℓ-tangle-free.
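The inequality underlying this step is simply that the trace of (MM*)^k sums the 2k-th powers of all singular values and hence dominates the largest one. A quick numerical illustration (assuming NumPy; the Gaussian matrix is just a stand-in for the almost-centered matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))      # stand-in for the almost-centered matrix
k = 4
spectral = np.linalg.norm(M, 2)        # largest singular value of M
# tr[(M M^T)^k] = sum_i sigma_i^(2k) >= sigma_max^(2k) = ||M||^(2k)
trace_bound = np.trace(np.linalg.matrix_power(M @ M.T, k))
assert spectral ** (2 * k) <= trace_bound
```

The trace is what the moment method actually computes, by expanding it as a sum over circuits.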
Finally, to demonstrate the usefulness of the spectral gap, we highlight three applications of our bound. In Section 6, we show a community detection application. Finding communities in networks is important in social network analysis, bioinformatics, and neuroscience, among other areas. Random graphs offer tractable models to study when detection and recovery are possible.
We show here how our results lead to community detection in regular stochastic block models with arbitrary numbers of groups, using a very general theorem by Wan and Meilă [2015]. Previously, Newman and Martin [2014] studied the spectral density of such models, and the community detection problem in the special case of two groups was previously studied by Brito, Dumitriu, Ganguly, Hoffman, and Tran [2015] and Barucca [2017].
In Section 7, we examine the application to linear error correcting codes built from sparse expander graphs. This concept was first introduced by Gallager [1962] who explicitly used random bipartite biregular graphs. These "low density parity check" codes enjoyed a renaissance in the 1990s, when people realized they were well-suited to modern computers. For an overview, see Urbanke [2003, 2008]. Our result yields an explicit lower bound on the minimum distance of such codes, i.e. the number of errors that can be corrected.
The final application, in Section 8, leads to generalized error bounds for matrix completion. Matrix completion is the problem of reconstructing a matrix from observations of a subset of entries. Heiman, Schechtman, and Shraibman [2014] gave an algorithm for reconstruction of a square matrix with low complexity as measured by a norm γ_2, which is similar to the trace norm (the sum of the singular values, also called the nuclear norm or Ky Fan n-norm). The entries observed are the nonzero entries of the adjacency matrix of a bipartite, biregular graph. The error of the reconstruction is bounded above by a factor proportional to the ratio of the leading two eigenvalues, so that a graph with larger spectral gap has a smaller generalization error. We extend their results to the rectangular case, along the way strengthening them by a constant factor of two. The main result of the paper gives an explicit bound in terms of d_1 and d_2.
As this paper was being prepared for submission, we became aware of the work of Deshpande, Montanari, O'Donnell, Schramm, and Sen [2018]. In their interesting paper, they use the smallest positive eigenvalue of a random bipartite lift to study convex relaxation techniques for random not-all-equal-3SAT problems. It seems that our main result addresses the configuration model version of this constraint satisfaction problem, the first open question listed at the end of Deshpande, Montanari, O'Donnell, Schramm, and Sen [2018].

Non-backtracking matrix B
Given G ∼ G(n, m, d_1, d_2), we define the non-backtracking operator B. This operator is a linear endomorphism of R^{|Ē|}, where Ē is the set of oriented edges of G and |Ē| = 2|E|. Throughout this paper, we will use V(H), E(H), and Ē(H) to denote the vertices, edges, and oriented or directed edges of a graph, subgraph, or path H. For oriented edges e = (u, v), where u and v are the starting and ending vertices of e, and f = (s, t), define
B_{ef} = 1 if v = s and t ≠ u, and B_{ef} = 0 otherwise.
We order the elements of Ē as {e_1, e_2, ..., e_{2|E|}}, so that the first |E| have their end point in the set V_2.
In this way, we can write
B = ( 0  B^{(12)} ; B^{(21)}  0 )
for |E| × |E| matrices B^{(12)} and B^{(21)} with entries equal to 0 or 1. We are interested in the spectrum of B. Denote by 1_α the vector with first |E| coordinates equal to 1 and the last |E| equal to α = √((d_1 − 1)/(d_2 − 1)). A direct computation gives B 1_α = √((d_1 − 1)(d_2 − 1)) 1_α. By the Perron-Frobenius Theorem, we conclude that λ_1 = √((d_1 − 1)(d_2 − 1)) and the associated eigenspace has dimension one. Also, one can check that if λ is an eigenvalue of B with eigenvector (x, y), then −λ is an eigenvalue with eigenvector (x, −y). Thus, σ(B) = −σ(B) and λ_{2|E|} = −λ_1.
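As a check on this structure, the sketch below (assuming NumPy; K_{2,3} is used as a toy example) builds B from the oriented edges, confirms that 1_α with α = √((d_1 − 1)/(d_2 − 1)) is a Perron eigenvector, and verifies the symmetry of the spectrum:

```python
import numpy as np
from itertools import product

# Oriented edges (darts) of K_{2,3}: left vertices 0,1 and right vertices 2,3,4;
# the first |E| darts end in V2, matching the ordering chosen in the text
n, m, d1, d2 = 2, 3, 3, 2
edges = [(u, v) for u, v in product(range(n), range(n, n + m))]
darts = edges + [(v, u) for (u, v) in edges]
B = np.zeros((len(darts), len(darts)))
for i, (u, v) in enumerate(darts):
    for j, (s, t) in enumerate(darts):
        if v == s and t != u:          # e = (u,v) followed by f = (s,t), no backtracking
            B[i, j] = 1

lam1 = np.sqrt((d1 - 1) * (d2 - 1))
alpha = np.sqrt((d1 - 1) / (d2 - 1))
v_perron = np.concatenate([np.ones(len(edges)), alpha * np.ones(len(edges))])
assert np.allclose(B @ v_perron, lam1 * v_perron)   # 1_alpha is a Perron eigenvector

eig = np.linalg.eigvals(B)
assert np.isclose(np.max(np.abs(eig)), lam1)        # spectral radius sqrt((d1-1)(d2-1))
for z in eig:
    assert np.min(np.abs(eig + z)) < 1e-6           # sigma(B) = -sigma(B)
```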
2.1. Connecting the spectra of A and B. Understanding the spectrum of B turns out to be a challenging question. A useful result in this direction is the following theorem proved by Bass [1992], and subsequently in Watanabe and Fukumizu [2009] and Kotani and Sunada [2000]; see also Theorem 3.3 in Angel, Friedman, and Hoory [2015].
Theorem 1 (Ihara-Bass formula). Let G = (V, E) be any finite graph and B be its non-backtracking matrix. Then
det(λI − B) = (λ² − 1)^{|E|−|V|} det(λ²I − λA + D),
where D is the diagonal matrix with D_vv = d_v − 1 and A is the adjacency matrix of G.
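The identity can be verified numerically on a small graph. The sketch below (assuming NumPy; K_{2,3} is used as the test graph and λ is an arbitrary complex test point) checks det(λI − B) = (λ² − 1)^{|E|−|V|} det(λ²I − λA + D):

```python
import numpy as np
from itertools import product

# Build B and A for K_{2,3}: left vertices 0,1 and right vertices 2,3,4
n, m = 2, 3
edges = [(u, v) for u, v in product(range(n), range(n, n + m))]
darts = edges + [(v, u) for (u, v) in edges]
B = np.zeros((len(darts), len(darts)))
for i, (u, v) in enumerate(darts):
    for j, (s, t) in enumerate(darts):
        if v == s and t != u:
            B[i, j] = 1

A = np.zeros((n + m, n + m))
for u, v in edges:
    A[u, v] = A[v, u] = 1
D = np.diag(A.sum(axis=1) - 1)         # D_vv = d_v - 1

lam = 0.7 + 0.3j                       # arbitrary complex test point
E, V = len(edges), n + m
lhs = np.linalg.det(lam * np.eye(len(darts)) - B)
rhs = (lam**2 - 1) ** (E - V) * np.linalg.det(lam**2 * np.eye(V) - lam * A + D)
assert np.isclose(lhs, rhs)            # Ihara-Bass identity holds
```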
We use the Ihara-Bass formula to analyze the relationship of the spectrum of B to the spectrum of A in the case of a bipartite biregular graph. It will turn out that this relationship can be completely unpacked. From Theorem 1, we get that, apart from ±1 (each appearing with multiplicity at least |E| − |V|), the eigenvalues of B are the roots of
det(λ²I − λA + D) = 0.
Note that there are precisely 2(m + n) eigenvalues of B that are determined by A in this way, and that λ = 0 is not in the spectrum of B, since the graph has no isolated vertices (det(D) ≠ 0).
We use the special structure of G to get a more precise description of σ(B). The matrices A and D are equal to
A = ( 0  X ; X*  0 ),    D = ( (d_1 − 1) I_n  0 ; 0  (d_2 − 1) I_m ),
where I_k is the k × k identity matrix. Let λ ∈ σ(B)\{−1, 1}. Then there exists a nonzero vector v such that (D − λA + λ²I)v = 0.
Figure 3. Left, the spectrum of the adjacency matrix A of a sample graph, with the edge of the bulk shown in black. Right, we depict the spectrum of the non-backtracking matrix B for the same graph. Each eigenvalue is shown as a transparent orange circle, the leading eigenvalues are marked with blue crosses, and the eigenvalues arising from zero eigenvalues of A are marked with blue stars. Our main result, Theorem 3, proves that with high probability the non-leading eigenvalues are inside, on, or very close to the black dashed circle. In this case there are 8 outliers of the circle, which arise from 2 pairs of eigenvalues below and above the Marcenko-Pastur bulk.
Writing v = (v_1, v_2) with v_1 ∈ C^n, v_2 ∈ C^m, we obtain:
(4)  λ X v_2 = (λ² + d_1 − 1) v_1,
(5)  λ X* v_1 = (λ² + d_2 − 1) v_2.
The above imply that, provided the right hand side is non-zero,
(6)  ξ² = (λ² + d_1 − 1)(λ² + d_2 − 1)/λ²
is a nonzero eigenvalue of both XX*, with eigenvector v_1, and X*X, with eigenvector v_2. We can rewrite Eqn. (6) as
(7)  λ⁴ + (d_1 − 1 + d_2 − 1 − ξ²) λ² + (d_1 − 1)(d_2 − 1) = 0.
We will now detail how the eigenvalues of A (denoted ξ here) map to eigenvalues of B and vice versa. Let us examine the special case ξ = 0. Assume n ≤ m for simplicity, and assume that the rank of X is r. Then X has m − r independent vectors in its nullspace. Let u be one such vector. Now, if we pick v_2 = u, v_1 = 0, and λ = ±i√(d_2 − 1), Eqns. (4) and (5) are satisfied. Hence, ±i√(d_2 − 1) are eigenvalues of B, both with multiplicity m − r.
Since the rank of X is r, it follows that the nullity of X * is n − r, so there are n − r independent vectors w for which X * w = 0. Now, note that picking v 1 = w, v 2 = 0, and λ = ±i √ d 1 − 1, we satisfy Eqns. (4) and (5). Thus, ±i √ d 1 − 1 are eigenvalues of B, both with multiplicity n − r.
The remaining 4r eigenvalues of B determined by A come from nonzero eigenvalues of A. For each ξ² with ξ a nonzero eigenvalue of A, we will have precisely 4 complex solutions to Eqn. (7). Since there are 2r such eigenvalues, coming in pairs ±ξ, they determine a total of 4r eigenvalues of B, and the count is complete. To summarize the discussion above, we have the following Lemma:
Lemma 2. Any eigenvalue of B belongs to one of the following categories: (1) ±1 are both eigenvalues with multiplicities |E| − |V| = nd_1 − m − n, (2) ±i√(d_2 − 1) are eigenvalues with multiplicities m − r, where r is the rank of the matrix X, (3) ±i√(d_1 − 1) are eigenvalues with multiplicities n − r, and (4) every pair of non-zero eigenvalues (−ξ, ξ) of A generates exactly 4 eigenvalues of B through Eqn. (7).
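For a small example, the multiplicities derived above can be tallied directly. The sketch below (assuming NumPy) builds B for K_{2,3}, where X = ones(2, 3) has rank r = 1, and checks the counts m − r and n − r; the nonzero eigenvalues ξ = ±√6 of A contribute one extra pair of eigenvalues at ±1 on top of the |E| − |V| pair:

```python
import numpy as np
from itertools import product

# Tally the eigenvalues of B for K_{2,3} against the categories above
n, m, d1, d2 = 2, 3, 3, 2
edges = [(u, v) for u, v in product(range(n), range(n, n + m))]
darts = edges + [(v, u) for (u, v) in edges]
B = np.zeros((len(darts), len(darts)))
for i, (u, v) in enumerate(darts):
    for j, (s, t) in enumerate(darts):
        if v == s and t != u:
            B[i, j] = 1

X = np.ones((n, m))                    # biadjacency of K_{2,3}
r = np.linalg.matrix_rank(X)           # r = 1
lam = np.linalg.eigvals(B)

def mult(z):
    """Count eigenvalues of B numerically equal to z."""
    return int(np.sum(np.isclose(lam, z, atol=1e-6)))

assert mult(1j * np.sqrt(d2 - 1)) == m - r   # +/- i sqrt(d2-1), multiplicity m - r
assert mult(1j * np.sqrt(d1 - 1)) == n - r   # +/- i sqrt(d1-1), multiplicity n - r
# |E|-|V| = 1 copy of +/-1, plus one more pair from xi = +/- sqrt(d1*d2)
assert mult(1.0) == (n * d1 - m - n) + 1
```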

Main result
We spend the bulk of this paper in the proof of the following:
Theorem 3. If B is the non-backtracking matrix of a bipartite, biregular random graph G ∼ G(n, m, d_1, d_2), then its second largest eigenvalue satisfies |λ_2(B)| ≤ ((d_1 − 1)(d_2 − 1))^{1/4} + ε_n asymptotically almost surely; that is, there exists a sequence ε_n → 0 as n → ∞ so that
P( |λ_2(B)| ≤ ((d_1 − 1)(d_2 − 1))^{1/4} + ε_n ) → 1.
Remark. For the random lift model, Theorem 3 was proved by Bordenave [2015]; that result applies to random bipartite graphs only when d_1 = d_2 = d, as discussed in Section 1.2.
We combine Theorems 1 and 3 to prove our main result concerning the spectrum of A.
Remark. Since the first draft of this work came out, considerable advances have been made regarding the question of singularity of random regular graphs. It was conjectured in Costello and Vu [2008] that, for 3 ≤ d ≤ n − 3, the adjacency matrix of a uniform d-regular graph is not singular with high probability as n grows to infinity. For directed d-regular graphs and growing d, this is now known to be true, following the results of Cook [2017] and Youssef [2016, 2017]. For constant degree d, Huang [2018a,b] proved the asymptotic non-singularity of the adjacency matrix for both undirected and directed d-regular graphs. The latter case can be interpreted as non-singularity of the adjacency matrix of a random d-regular bipartite graph. To the best of our knowledge, Theorem 4(iii) is the first result concerning the rank of rectangular random matrices with d_1 nonzero entries in each row and d_2 in each column.
Remark. The analysis of the Ihara-Bass formula for Markov matrices of bipartite biregular graphs appeared earlier in Kempton [2016]. We have independently proven Lemma 2 and extracted from it more information than is given in Kempton [2016], including Theorem 4(iii).
Proof. Eqn. (7), describing those eigenvalues of B which are neither ±1 nor correspond to 0 eigenvalues of A, is equivalent to
(8)  x² + (α + β − y)x + αβ = 0,
where x = λ², y = ξ², α = d_1 − 1, and β = d_2 − 1. A simple discriminant calculation and analysis of Eqn. (8), keeping in mind that y ≠ 0, leads to a number of cases in terms of y.
Case 1: y ∈ ((√α − √β)², (√α + √β)²), which roughly speaking means that ξ is in the bulk, implies that the two solutions x are complex conjugates on the circle of radius √(αβ), and the corresponding pair of eigenvalues λ lie on a circle of radius (αβ)^{1/4}.
Case 2: y ∈ (0, (√α − √β)²] means that x is real and negative, so λ is purely imaginary. In this case, one may also show that the smaller of the two possible values for x is increasing as a function of y, with x_− ∈ (−α, −√(αβ)]. The larger of the two values of x is decreasing, with x_+ ∈ [−√(αβ), −β). Correspondingly, the largest in absolute value that λ could be in this case approaches ±i√α = ±i√(d_1 − 1).
Case 3: y ≥ (√α + √β)² means that both solutions x_± are real, and the larger of the two is at least √(αβ).
Note that Eqn. (8) shows there is a continuous dependence between x and y, and consequently between ξ and λ. Putting these cases together with Lemma 2, a few things become apparent: eigenvalues ξ in the bulk of A produce eigenvalues λ of B on the circle of radius ((d_1 − 1)(d_2 − 1))^{1/4}, and an eigenvalue ξ within ε of the bulk produces λ within δ of this circle, with δ small if ε is small, since the dependence of δ on ε can be deduced from Eqn. (8).
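The three cases can be seen numerically by solving Eqn. (8) for sample values of y. The sketch below (assuming NumPy) uses α = 6, β = 2, i.e. d_1 = 7, d_2 = 3:

```python
import numpy as np

# The three cases of Eqn. (8) for alpha = d1 - 1 = 6, beta = d2 - 1 = 2
alpha, beta = 6.0, 2.0
edge_lo = (np.sqrt(alpha) - np.sqrt(beta)) ** 2
edge_hi = (np.sqrt(alpha) + np.sqrt(beta)) ** 2

def roots(y):
    """Solve x^2 + (alpha + beta - y) x + alpha*beta = 0."""
    return np.roots([1.0, alpha + beta - y, alpha * beta])

# Case 1: y inside the bulk -> complex conjugate roots on the circle |x| = sqrt(alpha*beta)
x = roots((edge_lo + edge_hi) / 2)
assert np.allclose(np.abs(x), np.sqrt(alpha * beta))
# Case 2: 0 < y < (sqrt(alpha)-sqrt(beta))^2 -> both roots real and negative (lambda imaginary)
x = roots(edge_lo / 2)
assert np.all(np.isreal(x)) and np.all(x.real < 0)
# Case 3: y > (sqrt(alpha)+sqrt(beta))^2 -> real roots, the larger exceeding sqrt(alpha*beta)
x = roots(edge_hi + 1.0)
assert np.all(np.isreal(x)) and x.real.max() > np.sqrt(alpha * beta)
```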
In Fig. 3, we depict the spectra of A and B for a sample graph G ∼ G(120, 280, 7, 3). Looking at the non-backtracking spectrum, we observe the two leading eigenvalues ±√((d_1 − 1)(d_2 − 1)) (blue crosses) outside the circle of radius ((d_1 − 1)(d_2 − 1))^{1/4}, along with a number of zero eigenvalues of A (black dots). There are also multiple purely imaginary eigenvalues, which can arise from |ξ| ∈ (0, √(d_1 − 1) − √(d_2 − 1)) as well as ξ = 0. However, due to Theorem 4, only the smaller of i√(d_1 − 1) and i√(d_2 − 1) is observed with non-negligible probability, implying that X has rank r = n with high probability (shown as blue stars). Furthermore, we observe two pairs of real eigenvalues of B which are connected to a pair of eigenvalues of A from "above" the bulk, as well as two pairs of imaginary eigenvalues of B which are connected to a pair of eigenvalues of A from "below" the bulk.

Preliminaries
We describe the standard configuration model for constructing such graphs. We then define the "tangle-free" property of random graphs. Since small enough neighborhoods are tangle-free with high probability, we only need to count tangle-free paths when we eventually employ the trace method.
4.1. The configuration model. The configuration or permutation model is a practical procedure to sample random graphs with a given degree distribution. Let us recall its definition for bipartite biregular graphs. Let V_1 = {v_1, v_2, ..., v_n} and V_2 = {w_1, w_2, ..., w_m} be the vertices of the graph. We define the set of half edges out of V_1 to be the collection of ordered pairs
E_1 = {(v_i, j) : 1 ≤ i ≤ n, 1 ≤ j ≤ d_1},
and analogously the set of half edges out of V_2:
E_2 = {(w_s, t) : 1 ≤ s ≤ m, 1 ≤ t ≤ d_2}.
To sample a graph, we choose a random permutation π of [nd_1]. We put an edge between v_i and w_s in G whenever π((i − 1)d_1 + j) = (s − 1)d_2 + t for some 1 ≤ j ≤ d_1 and 1 ≤ t ≤ d_2. For specific half edges e = (v_i, j) and f = (w_s, t), we use the notation π(e) = f as shorthand for π((i − 1)d_1 + j) = (s − 1)d_2 + t and say that "e matches to f." The graph obtained may not be simple, since multiple half edges may be matched between any pair of vertices. However, conditioned on the outcome being simple, the distribution is uniform on the set of all simple bipartite biregular graphs. Furthermore, for fixed d_1, d_2 and n, m → ∞, the probability of getting a simple graph is bounded away from zero [Bollobás, 2001].
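The sampling step can be sketched in a few lines. This is a minimal illustration of the matching procedure (not the authors' code, and 0-indexed rather than 1-indexed); as noted above, it may produce multi-edges, which are handled by conditioning on simplicity:

```python
import random
from collections import Counter

def sample_config(n, m, d1, d2, seed=0):
    """One draw of the bipartite configuration model: the n*d1 half edges out of
    V1 are matched to the m*d2 half edges out of V2 by a uniform permutation pi."""
    assert n * d1 == m * d2
    rng = random.Random(seed)
    pi = list(range(n * d1))
    rng.shuffle(pi)
    # half edge k of V1 belongs to left vertex k // d1; its match pi[k]
    # belongs to right vertex pi[k] // d2
    return [(k // d1, pi[k] // d2) for k in range(n * d1)]

edges = sample_config(4, 6, 3, 2)        # possibly a multigraph
left = Counter(u for u, _ in edges)
right = Counter(w for _, w in edges)
assert all(left[i] == 3 for i in range(4))    # every left vertex has degree d1
assert all(right[s] == 2 for s in range(6))   # every right vertex has degree d2
```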
Consider the random matrix B ∈ R^{2|E|×2|E|} whose first |E| rows are indexed by the elements of E_1 and the last |E| rows are indexed by those of E_2, in lexicographic order. Columns are indexed in the same way. The entry B_{ef} with e = (v_i, j) ∈ E_1 and f = (w_s, t) ∈ E_2 is defined as
B_{ef} = Σ_{t' ≠ t} 1{π(e) = (w_s, t')}.
This defines the upper half of B. We define the lower half similarly, by putting
B_{fe} = Σ_{j' ≠ j} 1{π((v_i, j')) = f}.
This is the same definition used in Bordenave [2015]. In words, it says that the directed edge given by e followed by the directed edge given by f are connected by some half edge f' = (w_s, t'), and the path they form does not backtrack. This is therefore the same matrix introduced in Section 2, ordered according to the half edges. Notice that the randomness comes from the matching only. We consider two symmetric matrices M = M(π) and N, indexed the same as B, and defined by:
M_{ef} = 1{π(e) = f or π(f) = e},    N_{ef} = 1{e ≠ f and e, f are attached to the same vertex}.
We see that a term like M_{eg} N_{gf} corresponds to M matching the directed edge e to g by π, and N taking us out of the vertex of g along the directed edge f, which is different from g. Thus the rule of matrix multiplication means that
(9)  B = MN.
This equality will be useful in Section 5.2 when working with products of the matrix B.

4.2. Tangle-free paths. Sparse random graphs, including bipartite ones, have the important property of being "tree-like" in the neighborhood of a typical vertex. Formally, consider a vertex v ∈ V_1 ∪ V_2. For a natural number ℓ, we define the ball of radius ℓ centered at v to be
B_ℓ(v) = {u ∈ V_1 ∪ V_2 : dist(u, v) ≤ ℓ}.
We say that a graph is ℓ-tangle-free if B_ℓ(v) contains at most one cycle for every vertex v. The next lemma says that most bipartite biregular graphs are ℓ-tangle-free up to logarithmic sized neighborhoods.
Lemma 5. Let d = max(d_1, d_2) and ℓ = c log_d n with c < 1/8. Then G ∼ G(n, m, d_1, d_2) is ℓ-tangle-free asymptotically almost surely.
Proof. This is essentially the proof given in Lubetzky and Sly [2010], Lemma 2.1. Fix a vertex v. We will use the so-called exploration process to discover the ball B_ℓ(v). More precisely, we order the set E_1 lexicographically. The exploration process reveals π one edge at a time: a uniform element is chosen from E_2 and declared equal to π(1), and each subsequent half edge is matched uniformly among the half edges of E_2 that remain unmatched.
We use the final π to output a graph as we did in the configuration model; the law of the resulting graphs is the same. With the exploration process, we expose first the neighbors of v, then the neighbors of these vertices, and so on. This breadth-first search reveals all vertices in B_k(v) before any vertices at depth greater than k. Note that, although our bound is for the family G(n, m, d_1, d_2), the neighborhood sizes are bounded above by those of the d-regular graph with d = max(d_1, d_2). Consider the matching of half edges attached to vertices in the ball B_i(v) at depth i (thus revealing vertices at depth i + 1). In this process, we match at most m_i ≤ d^{i+1} pairs of half edges in total. Let F_{i,k} be the filtration generated by matching up to the kth half edge in B_i(v), for 1 ≤ k ≤ m_i. Denote by A_{i,k} the event that the kth matching creates a cycle at the current depth. For this to happen, the matched vertex must have appeared among the k − 1 vertices already revealed at depth i + 1. The number of unmatched half edges is at least nd_1 − 2d^{i+1}. We then have that
P(A_{i,k} | F_{i,k−1}) ≤ (k − 1)d / (nd_1 − 2d^{i+1}) = O(d^{i+1}/n).
So, we can stochastically dominate the number of cycles formed in B_ℓ(v), namely Σ_{i,k} 1_{A_{i,k}}, by a binomial random variable with O(d^{ℓ+1}) trials and success probability O(d^{ℓ+1}/n). The probability that B_ℓ(v) fails to be ℓ-tangle-free, i.e. that at least two cycles form, then has the bound
P(at least two cycles in B_ℓ(v)) = O(d^{4(ℓ+1)}/n²) = o(1/n),
which follows using that ℓ = c log_d n with c < 1/8. The Lemma follows by taking a union bound over all vertices.
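The tangle-free condition is easy to test on a concrete graph: a connected ball contains at most one cycle exactly when it has no more edges than vertices. A small sketch (hypothetical helper name `ball_tangle_free`, assuming an adjacency-list representation; the toy graph is the union of two K_{2,3} blocks from Section 1.2):

```python
from collections import deque, defaultdict

def ball_tangle_free(adj, v, ell):
    """Check that the ball of radius ell around v contains at most one cycle:
    for a connected subgraph this means #edges <= #vertices."""
    dist = {v: 0}
    q = deque([v])
    while q:                            # breadth-first search to depth ell
        u = q.popleft()
        if dist[u] == ell:
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    ball = set(dist)
    n_edges = sum(1 for u in ball for w in adj[u] if w in ball) // 2
    return n_edges <= len(ball)

# toy (3,2)-biregular example: disjoint union of two K_{2,3} blocks
adj = defaultdict(list)
for block in (0, 5):
    for u in (block, block + 1):
        for w in (block + 2, block + 3, block + 4):
            adj[u].append(w)
            adj[w].append(u)

assert all(ball_tangle_free(adj, v, 1) for v in adj)   # radius-1 balls are trees
assert not ball_tangle_free(adj, 0, 2)                 # K_{2,3} has two independent cycles
```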

Proof of Theorem 3
5.1. Outline. We are now prepared to explain the main result. To study the second largest eigenvalue of the non-backtracking matrix, we examine the spectral radius of the matrix obtained by subtracting off the dominant eigenspace. We use for this:
Lemma 6 (Bordenave, Lelarge, and Massoulié [2015], Lemma 3). Let T and R be matrices such that Im(T) ⊂ Ker(R), Im(T*) ⊂ Ker(R). Then all eigenvalues λ of T + R that are not eigenvalues of T satisfy
(10)  |λ| ≤ max_{x ∈ Ker(T), ‖x‖ = 1} ‖(T + R)x‖.
Throughout the text, ‖·‖ is the spectral norm for matrices and the ℓ_2-norm for vectors. Recall that the leading eigenvalues of B, in magnitude, are λ_1 = √((d_1 − 1)(d_2 − 1)) and λ_{2|E|} = −λ_1, with corresponding eigenvectors 1_α and 1_{−α}. We apply Lemma 6 with T + R = B^ℓ and T built from these leading eigenvectors. It will be important later to have a more precise description of the set Ker(T). It is not hard to check that
Ker(T) = span(1_α, 1_{−α})^⊥ = {(v, w) : ⟨v, 1⟩ = ⟨w, 1⟩ = 0}.
In the last line, the vectors v, w and 1 are |E|-dimensional, and 1 is the vector of all ones.
In order to use Eqn. (10), we must bound ‖B^ℓ x‖ for large powers ℓ and x ∈ Ker(T). This amounts to counting the contributions of certain non-backtracking walks. We will use the tangle-free property in order to only count ℓ-tangle-free walks. We break up B^ℓ into two parts in Section 5.2, an "almost" centered matrix B̄^ℓ and the remainder Σ_j R_{ℓ,j}, and we bound each term independently.
To compute these bounds, we need to count the contributions of many different non-backtracking walks. We will use the trace technique, so only circuits which return to the starting vertex will contribute. In Section 5.3, we compute the expected contribution of products of B along such circuits, employing a result from Bordenave [2015].
Section 5.4 covers the combinatorial component of the proof. The total contributions to ‖B^ℓ x‖ come from many non-backtracking circuits of different flavors, depending on their number of vertices, edges, cycles, etc. Each circuit is broken up into 2k segments of tangle-free walks of length ℓ. We need to compute not only the expectation along the circuit, but also to upper-bound the number of circuits of each flavor. We introduce an injective encoding of such circuits that depends on the number of vertices, the length of the circuit, and, crucially, the tree excess of the circuit. An important part of these calculations is to keep track of the imbalance between left and right vertices visited in the circuit, since this controls the powers of d_1 and d_2 in the result.
Finally, in Section 5.8 we put all of these ingredients together and use Markov's inequality to bound each matrix norm with high probability. We find that B̄^ℓ contributes a factor that goes as ((d_1 − 1)(d_2 − 1))^{ℓ/4}, whereas each R_{ℓ,j} contributes only a factor of (d − 1)^ℓ/n, up to polylogarithmic factors in n. Thus, the main contribution to the circuit counts comes from the mean and, in fact, comes from circuits which are exactly trees traversed forwards and backwards. Interestingly, this is analogous to what happens when using the trace method on random matrices with independent entries.
Figure 4. A circuit contributing for k = 2 and ℓ = 2. Each segment γ_i is of length ℓ + 1 = 3 and made up of edges 3(i − 1) + 1 through 3i. The last edge of each γ_i is the first edge of γ_{i+1}, and these are shown in purple. Every path γ_i with i even follows the edges backwards due to the matrix transpose; this detail turns out not to make any difference since the underlying graph is undirected. Our example has no cycles in each segment for clarity, but, in general, each segment can have up to one cycle, and the overall circuit may be tangled.
In the proof, we are forced to consider tangled paths which are built up of tangle-free components. This delicate issue was first made clear by Friedman [2004], who introduced the idea of tangles and a "selective trace." Bordenave, Lelarge, and Massoulié [2015], whom we follow closely in this part of our analysis, also have a good discussion of these issues and their history. We use the fact that ‖B̄^{(ℓ)}‖^{2k} ≤ tr[(B̄^{(ℓ)} (B̄^{(ℓ)})^*)^k], and so deal with circuits built up of 2k segments which are ℓ-tangle-free. Notice that the first segment comes from B̄^{(ℓ)}, the second from (B̄^{(ℓ)})^*, etc. Because of this, the directionality of the edges along each segment alternates. See Figure 4 for an illustration of a path which contributes for k = 2 and ℓ = 2. Also, while each segment is ℓ-tangle-free, the overall circuit may be tangled.

5.2. Matrix decomposition. We start this section by defining the set of paths that will be relevant to bound the norm of B^ℓ. We closely follow Bordenave [2015].
Definition 2. Define Γ^ℓ_{ef} to be the set of all non-backtracking paths of 2ℓ + 1 half edges, starting at e and ending at f. A path in this set will be denoted by γ = (e_1, e_2, . . . , e_{2ℓ}, e_{2ℓ+1}), where e_1 = e and e_{2ℓ+1} = f. The non-backtracking property means that, for all 1 ≤ i ≤ ℓ, e_{2i} and e_{2i+1} share the same vertex but e_{2i} ≠ e_{2i+1}. Similarly, let Γ^ℓ = ∪_{e,f} Γ^ℓ_{ef}.
Each path in Γ^ℓ_{ef} uses 2ℓ + 1 half edges, corresponding to ℓ + 1 edges in the graph. To be clear, the above definition counts all possible non-backtracking sequences of half edges. These are different from the usual non-backtracking paths and do not necessarily exist in the graph. Some of these paths might backtrack along a duplicate edge which utilizes a different half edge.
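As a quick numerical sanity check (ours, not part of the argument), one can build the non-backtracking operator of a small deterministic biregular graph directly on directed edges, rather than on matched half edges, and verify that its spectral radius is √((d_1 − 1)(d_2 − 1)), the growth rate that governs the bounds below. All names in the sketch are our own.

```python
import numpy as np

# Sketch (illustrative): the non-backtracking operator of the complete
# bipartite graph K_{2,3}, which is (d1, d2)-biregular with d1 = 3 (left
# degree) and d2 = 2 (right degree). Its spectral radius should equal
# sqrt((d1 - 1)(d2 - 1)) = sqrt(2).

left, right = [0, 1], [2, 3, 4]
edges = [(u, v) for u in left for v in right]
darts = edges + [(v, u) for u, v in edges]   # both orientations of each edge

B = np.zeros((len(darts), len(darts)))
for i, (u, v) in enumerate(darts):
    for j, (x, y) in enumerate(darts):
        # Continue the walk at v without backtracking to u.
        if x == v and y != u:
            B[i, j] = 1.0

rho = max(abs(np.linalg.eigvals(B)))
print(rho)   # approximately sqrt(2) ~ 1.4142
```

The two-step count makes this transparent: a left-to-right dart has one non-backtracking continuation, a right-to-left dart has two, so non-backtracking walks grow by a factor (d_1 − 1)(d_2 − 1) = 2 every two steps.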
We now have
(12) (B^ℓ)_{ef} = Σ_{γ ∈ Γ^ℓ_{ef}} ∏_{t=1}^{ℓ} M_{e_{2t−1} e_{2t}},
where we used Eqn. (9) and the fact that every γ ∈ Γ^ℓ_{ef} is non-backtracking, so N_{e_{2t} e_{2t+1}} = 1.
Recall that we will use Eqn. (10) and Lemma 6 to bound λ_2. Denote by B̄ the matrix with entries B̄_{ef} = B_{ef} − S_{ef}. Note that B̄ is an almost centered version of B, and Ker(S) = Ker(T) = span(1_α, 1_{−α}), where T is the matrix from Lemma 6. To apply the lemma, we wish to get an expression like Eqn. (12) for B̄^ℓ. To do so, we write the corresponding matrix equation in the unknown S, which can be solved by simple manipulations. Using again that N is identically one over the elements of the set Γ^ℓ_{ef}, we find a formula similar to Eqn. (12). The following telescoping sum formula is a simple algebraic manipulation and appears in Massoulié [2013] and Bordenave, Lelarge, and Massoulié [2015]:
∏_{s=1}^{ℓ} x_s − ∏_{s=1}^{ℓ} y_s = Σ_{j=1}^{ℓ} (∏_{s=1}^{j−1} x_s)(x_j − y_j)(∏_{s=j+1}^{ℓ} y_s).
Using this, with x_s = B_{e_{2s−1} e_{2s+1}} and y_s = B̄_{e_{2s−1} e_{2s+1}}, we obtain the relation of Eqn. (14). This decomposition breaks the elements in Γ^ℓ_{ef} into two subpaths, also non-backtracking, of lengths j and ℓ − j, respectively.
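The telescoping sum formula invoked here is purely algebraic and easy to check numerically with scalar stand-ins for the matrix entries (a sketch of ours; variable names are illustrative):

```python
import random

# Telescoping identity:
#   prod(x) - prod(y) = sum_j prod(x[:j]) * (x[j] - y[j]) * prod(y[j+1:]),
# checked on random scalars standing in for the entries B and (centered) B.

def prod(vals):
    out = 1.0
    for v in vals:
        out *= v
    return out

random.seed(0)
ell = 6
x = [random.uniform(-1, 1) for _ in range(ell)]
y = [random.uniform(-1, 1) for _ in range(ell)]

lhs = prod(x) - prod(y)
rhs = sum(prod(x[:j]) * (x[j] - y[j]) * prod(y[j + 1:]) for j in range(ell))
print(abs(lhs - rhs))   # numerically zero
```

The sum telescopes: the j-th term adds x_1···x_j y_{j+1}···y_ℓ and subtracts x_1···x_{j−1} y_j···y_ℓ, so only the first and last products survive.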
Definition 3. Let F^ℓ_{ef} ⊂ Γ^ℓ_{ef} denote the subset of paths which are tangle-free, with F^ℓ = ∪_{e,f} F^ℓ_{ef}.
We will take the parameter ℓ to be small enough so that the path γ is tangle-free with high probability. Thus the sums in Eqns. (12) and (13) need only run over the paths γ ∈ F^ℓ_{ef}. However, to recover the matrices B and B̄ by rearranging Eqn. (14), we also need to count those tangle-free subpaths that arise from splitting tangled paths. While breaking a tangle-free path necessarily gives two new tangle-free subpaths, the converse is not always true. This extra term generates a remainder that we define now.
Adding and subtracting Σ_{j=1}^{ℓ} (R_{ℓ,j})_{ef} in Eqn. (14) and rearranging the sums, we obtain Eqn. (16), where the matrices B^{(ℓ)} and B̄^{(ℓ)} are tangle-free versions of B^ℓ and B̄^ℓ, i.e. element ef in both matrices only counts paths γ ∈ F^ℓ_{ef}. Multiplying Eqn. (16) on the right by x ∈ Ker(T) and using that B^{(ℓ−j)} x is also within Ker(S), since it is just the space spanned by the leading eigenvectors, we find that the middle term is identically zero. Thus, for x ∈ Ker(T), we obtain Eqn. (17).

5.3. Expectation bounds.
Our goal is to find a bound on the expectation of certain random variables which are products of B̄_{ef} along a circuit. To do this, we will need to bound the probabilities of different subgraphs appearing when exploring G. This requires us to introduce the concept of consistent edges and their multiplicity.
• The multiplicity of a half edge e ∈ E(γ) is m_γ(e) = Σ_{t=1}^{2k} 1{e_t = e}, the number of times e appears in γ.
Lemma 7. Let γ = (e_1, . . . , e_{2k}) be a sequence of half edges of even length, with M and M̄ the matching matrix and its centered version generated by a uniform matching in the configuration model. Then for 1 ≤ k ≤ |E| and 0 ≤ t_0 ≤ k we have that
where b is the number of inconsistent edges of multiplicity one occurring before t_0, E_1 is the number of consistent edges with multiplicity one occurring before t_0, E = |E(γ)|, and C is a universal constant.
Proof. Recall that, in the bipartite setting, the matching matrix has the block form
M = [ 0, M_1 ; M_1^*, 0 ],
where M_1 ∈ R^{|E|×|E|} is a random permutation matrix between the nd_1 = |E| left and md_2 = |E| right half edges. Therefore, M_1 is distributed exactly the same as the matching matrix of a random |E|-lift of a single edge, and the same holds for its centered version M_1 − (1/|E|) 11^*. The only paths γ that contribute in this bipartite setting must alternate between the bipartite sets and avoid the 0 blocks; otherwise the bound holds trivially. For one of these paths γ, assume without loss of generality that the path starts in set V_1. Then define the transformed path γ' = (e'_1, . . . , e'_{2k}) = (e_1, e_2, e_4, e_3, e_5, . . .), i.e. with every other pair in γ in reverse order. Note that the entries of M̄ along γ coincide with the corresponding entries of the centered M_1 along γ'. The Lemma then holds by Bordenave [2015], Proposition 28.

5.4. Path counting. This section is devoted to counting the number of ways non-backtracking walks can be concatenated to obtain a circuit as in Section 5.2. We will closely follow the combinatorial analysis used in Brito, Dumitriu, Ganguly, Hoffman, and Tran [2016]. In that paper, the authors needed a similar count for self-avoiding walks. We make the necessary adjustments to our current scenario.
Our goal is to find a reasonable bound for the number of circuits which contribute to the trace bound, Eqn. (11), shown graphically in Figure 4. Define C^R_{V,E} as those circuits which visit exactly V = |V(γ)| different vertices, R = |V(γ) ∩ V_2| of them in the right set, and E = |E(γ)| different edges. Note, these are undirected edges in E(G). This is a set of circuits of length 2kℓ obtained as the concatenation of 2k non-backtracking, tangle-free walks of length ℓ. We denote such a circuit as γ = (γ_1, γ_2, · · · , γ_{2k}), where each γ_j is a length-ℓ walk.
To bound C^R_{V,E} = |C^R_{V,E}|, we will first choose the set of vertices and order them. The circuits which contribute are indeed directed non-backtracking walks. However, by considering undirected walks along a fixed ordering of vertices, that ordering sets the orientation of the first, and thus the rest, of the directed edges in γ. Thus, we are counting the directed walks which contribute to Eqn. (11). We relabel the vertices as 1, 2, . . . , V as they appear in γ. Denote by T_γ the spanning tree of those edges leading to new vertices as induced by the path γ. The enumeration of the vertices tells us how we traverse the circuit and thus defines T_γ uniquely.
We encode each walk γ_j by dividing it into sequences of subpaths of three types, which in our convention must always occur as type 1 → type 2 → type 3, although some may be empty subpaths. Each type of subpath is encoded with a number, and we use the encoding to upper bound the number of such paths that can occur. Given our current position on the circuit, i.e. the label of the current vertex, and the subtree of T_γ already discovered (over the whole circuit γ, not just the current walk γ_j), we define the types and their encodings:
Type 1: These are paths with the property that all of their edges are edges of T_γ and have been traversed already in the circuit. These paths can be encoded by their end vertex. Because this is a path contained in a tree, there is a unique path connecting its initial and final vertex. We use 0 if the path is empty.
Type 2: These are paths with all of their edges in T_γ but which are traversed for the first time in the circuit. We can encode these paths by their length, since they are traversing new edges, and we know in what order the vertices are discovered. We use 0 if the path is empty.
Figure 5. Encoding an ℓ-tangle-free walk, in this case the first walk in the circuit, γ_1, when it contains a cycle. The vertices and edges are labeled in the order of their traversal. The segments γ^a, γ^b, and γ^c occur on edges numbered (1, 2, 3); (4 + 6i, 5 + 6i, 6 + 6i, 7 + 6i, 8 + 6i, 9 + 6i) for i = 0, 1, . . . , c; and (10 + 6c), respectively. The encoding is (0, 3, 0)|(0, 4, 3)(4, 0, 0)⟩(0, 1, 0). Since the total length ℓ is known and equals 3 + 6(c + 1) + 1, we can back out c, the number of times the cycle is repeated. The encodings become more complicated later in the circuit as vertices see repeat visits.
Type 3: These paths are simply a single edge, not belonging to T_γ, that connects the end of a path of type 1 or 2 to a vertex that has already been discovered. Given our position on the circuit, we can encode such an edge by its final vertex. Again, we use 0 if the path is empty.
Now, we decompose γ_j into an ordered sequence of triples to encode its subpaths: (p_1, q_1, r_1)(p_2, q_2, r_2) · · · (p_t, q_t, r_t), where each p_i characterizes subpaths of type 1, q_i characterizes subpaths of type 2, and r_i characterizes subpaths of type 3. These subpaths occur in the order given by the triples. We perform this decomposition using the minimal possible number of triples. Now, p_i and r_i are both numbers in {0, 1, . . . , V}, since our circuit has V vertices. On the other hand, q_i ∈ {0, 1, . . . , ℓ}, since it represents the length of a subpath of a non-backtracking walk of length ℓ. Hence, there are (V + 1)^2 (ℓ + 1) possible triples. Next, we want to bound how many of these triples occur in γ_j. We will use the following lemma.
Lemma 8. Let (p_1, q_1, r_1)(p_2, q_2, r_2) · · · (p_t, q_t, r_t) be a minimal encoding of a non-backtracking walk γ_j, as described above. Then r_i = 0 can only occur in the last triple, i = t.
Proof. We check this case by case. Assume that for some i < t we have (p_i, q_i, 0), and consider the concatenation with (p_{i+1}, q_{i+1}, r_{i+1}). First, notice that p_{i+1} and q_{i+1} cannot both be zero, since then we would have (p_i, q_i, 0)(0, 0, v*), which can be written as (p_i, q_i, v*). If q_i ≠ 0 and p_{i+1} = 0, then q_{i+1} ≠ 0, and we have split a path of new edges (type 2), so the decomposition is not minimal. If instead p_{i+1} ≠ 0, then we visit new edges and then move to edges already visited, hence we must go through a type 3 edge, implying that r_i ≠ 0, a contradiction. Finally, if p_i ≠ 0 and q_i = 0, then we must have p_{i+1} = 0; otherwise, we split a path of old edges (type 1). We then require q_{i+1} ≠ 0, but (p_i, 0, 0)(0, q_{i+1}, r_{i+1}) is the same as (p_i, q_{i+1}, r_{i+1}), which contradicts the minimality condition. This covers all possibilities and finishes the proof.
Using the lemma, any encoding of a non-backtracking walk γ_j has at most one triple with r_i = 0. All other triples indicate the traversal of a type 3 edge. We now give a very rough upper bound for how many such encodings there can be. To do so, we will use the tangle-free property and slightly modify the encoding of the paths with cycles. Consider the two cases:
Case 1: Path γ_j contains no cycle. This implies that we traverse each edge within γ_j once. Thus, we can have at most χ = E − V + 1 many triples with r_i ≠ 0. This gives a total of at most ((V + 1)^2 (ℓ + 1))^{χ+1} many ways to encode one of these paths.
Case 2: Path γ_j contains a cycle. Since we are dealing with non-backtracking, tangle-free walks, we enter the cycle once, loop around some number of times, and never come back. We change the encoding of such paths slightly. Let γ^a_j, γ^b_j, and γ^c_j be the segments of the path before, during, and after the cycle. We mark the start of the cycle with | and its end with ⟩. The new encoding of the path is (γ^a_j) | (γ^b_j) ⟩ (γ^c_j), where we encode the segments separately. Observe that each such segment is connected and self-avoiding. The above encoding tells us all we need to traverse γ_j, including how many times to loop around the cycle: since the total length is ℓ, we can back out the number of circuits around the cycle from the lengths of γ^a_j, γ^b_j, and γ^c_j. See Figure 5. Following the analysis made for Case 1, the subpaths γ^a_j, γ^b_j, γ^c_j are encoded by at most χ + 1 triples, but we also have at most ℓ choices each for our marks | and ⟩. We are left with at most ℓ^2 ((V + 1)^2 (ℓ + 1))^{χ+1} ways to encode any path of this kind.
Together, these two cases mean there are fewer than 2ℓ^2 ((V + 1)^2 (ℓ + 1))^{χ+1} such paths. Now we conclude by encoding the entire circuit γ = (γ_1, . . . , γ_{2k}). We first choose V vertices, R of them in the set V_2, and order them, which can be done in (m)_R (n)_{V−R} ≤ m^R n^{V−R} different ways. Finally, in the whole path γ we are counting concatenations of 2k paths which are ℓ-tangle-free. Therefore, we conclude with the following Lemma:
Lemma 9. Let C^R_{V,E} be the set of circuits γ = (γ_1, . . . , γ_{2k}) of length 2kℓ obtained as the concatenation of 2k non-backtracking, tangle-free walks of length ℓ, i.e. γ_s ∈ F^ℓ for all s ∈ [2k], which visit exactly V = |V(γ)| different vertices, R = |V(γ) ∩ V_2| of them in the right set, and E = |E(γ)| different edges. If C^R_{V,E} = |C^R_{V,E}|, then
C^R_{V,E} ≤ m^R n^{V−R} (2ℓ^2 ((V + 1)^2 (ℓ + 1))^{χ+1})^{2k}.
The circuits that contribute to the remainder term R_{ℓ,j} are slightly different. In this case, each length-ℓ segment is an element of T_{ℓ,j} rather than F^ℓ. We have to slightly modify the previous argument for this case.
Lemma 10. Let D^R_{V,E} be the set of circuits γ = (γ_1, . . . , γ_{2k}) of length 2kℓ obtained as the concatenation of 2k elements γ_s ∈ T_{ℓ,j} for s = 1, . . . , 2k, that visit exactly V vertices, R of which are in V_2, and E different edges. Then, for D^R_{V,E} = |D^R_{V,E}|, we have
D^R_{V,E} ≤ m^R n^{V−R} (2ℓ^2 ((V + 1)^2 (ℓ + 1))^{χ+1})^{6k}.
Proof. Since each γ_s = (e_1, . . . , e_{2ℓ+1}) ∈ T_{ℓ,j}, we have that γ'_s = (e_1, . . . , e_{2j−1}) ∈ F^{j−1}, γ''_s = (e_{2j−1}, e_{2j}, e_{2j+1}) ∈ F^1, and γ'''_s = (e_{2j+1}, . . . , e_{2ℓ+1}) ∈ F^{ℓ−j}. Encoding γ'_s, γ''_s, and γ'''_s as before, we have the generous upper bound of at most (2ℓ^2 ((V + 1)^2 (ℓ + 1))^{χ+1})^3 many encodings for each γ_s. Choosing and ordering the vertices, then concatenating 2k of these paths, gives the final result.

5.5. Half edge isomorphism counting. We have constructed the circuits in C^R_{V,E} and D^R_{V,E} by choosing the vertices and edges that participate in them. However, the expectation bound applies to matchings of half edges in the configuration model. Since there are multiple ways to configure the half edges into such a circuit, this must be taken into account in the combinatorics.
Lemma 11. Let I^R_{V,E} be the number of half edge choices for the graph induced by γ ∈ C^R_{V,E} ∪ D^R_{V,E}. Then
I^R_{V,E} ≤ d_1^{V−R} (d_1 − 1)^{E−V+R} d_2^R (d_2 − 1)^{E−R}.
Proof. For every left vertex v, with degree g_v in the graph induced by γ, the number of choices of half edges is d_1 (d_1 − 1)^{g_v − 1}. Note that the choices of half edges are independent over the left vertices. We then get that there are d_1^{V−R} (d_1 − 1)^{E−V+R} many choices, where we used that the sum of all the degrees on one component of a bipartite graph equals the number of edges: E = Σ_v g_v. Similarly, for right vertices we get d_2^R (d_2 − 1)^{E−R}.
Corollary 12. We have that

5.6. Bounding the imbalance ψ. We focus now on the quantity defined as ψ = R − E/2. Informally, ψ captures the imbalance between the number of vertices in each partition of the bipartite graph visited by the circuit γ. We show that this imbalance is not too large.
Lemma 13. Let ℓ < (1/32) log_d(n). Then ψ ≤ 16k^2 with high probability.
Proof. For any subgraph H define ψ(H) = R(H) − E(H)/2; we set ψ = ψ(γ). To bound this quantity, we analyse the subgraph γ_{≤i}, obtained by the concatenation of the first i walks in γ, i.e. the union of the graphs induced by γ_1, . . . , γ_i. Our choice of ℓ implies that every neighborhood of radius 4ℓ is tangle-free with high probability. Hence, every non-backtracking walk γ_j is either a path or a path with exactly one loop. It is not hard to conclude that ψ(γ_j) ≤ 2 for all j and ψ(γ_{≤1}) = ψ(γ_1) ≤ 2. We now proceed inductively to add walks to our graph, one by one, as they appear on the circuit. We will upper bound the increment ψ(γ_{≤i+1}) − ψ(γ_{≤i}) by looking at how the addition of γ_{i+1} changes the imbalance.
To analyse this, consider the intersection of γ_{i+1} and each γ_j, 1 ≤ j ≤ i. Notice that ψ may increase only if there are vertices at which the two walks split apart. We claim that there are at most two such vertices. Assume that at v_1, v_2, and v_3 the two walks split. Then there are two disjoint cycles in the union of γ_{i+1} and γ_j, obtained by following each walk first from v_1 to v_2 and then from v_2 to v_3. But this is a contradiction, since the diameter of this union is less than 2ℓ < (1/8) log(n), which implies that the union is tangle-free. We conclude that ψ(γ_{i+1} ∪ γ_j) ≤ 8, since there are at most two splits and each split contributes at most four to the imbalance. Then ψ(γ_{≤i+1}) ≤ ψ(γ_{≤i}) + 8i + 2, which implies that ψ(γ) ≤ 16k^2, as desired.

5.7. Bounding the inconsistent edges. We will need a bound on the number of inconsistent edges of multiplicity one, which we get in the following lemma. Recall Definition 5, which introduced inconsistent edges.

Lemma 14. Let γ ∈ C^R_{V,E}, and let b_C be its number of inconsistent edges of multiplicity one. Then b_C ≤ 4k + 4χ. Similarly, if γ ∈ D^R_{V,E} has b_D inconsistent edges of multiplicity one, then b_D ≤ 16k + 4χ.
Proof.
Let {e, f} be an inconsistent edge of multiplicity one, where e and f are its half edges. For inconsistency, and without loss of generality, there must exist another edge {e, f'} in γ containing the half edge e, so that m_γ(e) ≥ 2. We may assume that {e, f'} is traversed before {e, f}. Let v be the vertex of e and consider the two possible scenarios:
Case 1: There is no cycle containing v in γ. Then the edge {e, f} may only be inconsistent if v is visited at the end of one of the 2k non-backtracking walks γ_i and {e, f} is at the beginning of γ_{i+1}. Hence, in this case, each junction yields at most two inconsistent edges of multiplicity one, giving at most 4k such edges.
Case 2: There is a cycle passing through v. For each such cycle there is an edge that does not belong to the tree T_γ (defined in Section 5.4). Furthermore, each cycle creates at most four inconsistent edges. Combining these two facts, we get at most 4χ, and the proof follows.
Proof. The argument is similar to the above; however, now there are 4k segments in γ̃ = ∪_{s=1}^{2k} (γ'_s ∪ γ'''_s), counting γ'_s and γ'''_s separately. As above, each of these 4k segments may yield at most 2 inconsistent edges. Furthermore, the graph induced by γ̃ may not be connected; let C be the number of connected components. Each edge that creates a cycle may yield at most 4 inconsistent edges, and there are at most E − V + C non-tree edges. Then we have that b_D ≤ 8k + 4(E − V + C) ≤ 8k + 4(E − V + 2k) ≤ 16k + 4χ, as claimed.
5.8. Bounds on the norms of B̄^{(ℓ)} and R_{ℓ,j}. All of the ingredients are gathered to bound the matrix norms.
Proof. The following holds for any natural number k, but for our proof we will take
(23) k = (log n)^{1/3} and ℓ = c log n for some c < 1/32.
We have
(24) E‖B̄^{(ℓ)}‖^{2k} ≤ E tr[(B̄^{(ℓ)} (B̄^{(ℓ)})^*)^k] = Σ_{γ ∈ C} E ∏_{s=1}^{2k} ∏_{t=1}^{ℓ} B̄_{e^s_{2t−1} e^s_{2t+1}}.
The sum is taken over the set C of all circuits γ of length 2kℓ, where γ = (γ_1, γ_2, . . . , γ_{2k}) is formed by concatenation of 2k tangle-free segments γ_s ∈ F^ℓ, with the convention e^{s+1}_1 = e^s_{2ℓ+1}. Again, refer to Figure 4 for clarification.
As in Section 5.4, we will break these into circuits which visit exactly V = |V(γ)| different vertices, R = |V(γ) ∩ V_2| of them in the right set, and E = |E(γ)| different edges. We define three disjoint sets of circuits: C_1 = {γ ∈ C : all edges in γ are traversed at least twice}, C_2 = {γ ∈ C : at least one edge in γ is traversed exactly once and V ≤ kℓ + 1}, and C_3 = {γ ∈ C : at least one edge in γ is traversed exactly once and V > kℓ + 1}.
Define the quantities I_j, the contributions to the sum from circuits in C_j, for j = 1, 2, and 3, so that (24) can be bounded as E‖B̄^{(ℓ)}‖^{2k} ≤ I_1 + I_2 + I_3. We will bound each term on the right-hand side. The reason for this division is that, by Lemma 7, when we have any edge traversed exactly once, the expectation of the corresponding circuit is smaller, because the matrix B̄ is nearly centered. We will see that the leading order terms in Eqn. (24) come from circuits in C_1. From Lemmas 7 and 9 and Corollary 12, we get the bound of Eqn. (26). We use C, c_0, c_1, c_2, c_3, c_4 to denote constant terms and set α = ((d_1 − 1)(d_2 − 1))^{1/4}. In the last line we used Lemma 13 and Lemma 14 to bound ψ and b_C in terms of k and χ, and removed the sum over R, which contains at most V terms. We will use Eqn. (26) to bound each I_j.
Finally, the first summand is maximized for V = kℓ + 1, and there are at most kℓ + 1 terms in that sum. Therefore, modifying the constant c_0 yields
(27) I_1 ≤ c_0 n ℓ^{4k} c_1^{k^2} c_2^k (kℓ + 1)^2 (kℓ + 2)^2 (ℓ + 1)^{2k} α^{2kℓ}.
For I_2, there is at least one edge traversed exactly once, so we have E ≥ V for γ ∈ C_2. Taking E_1 = 0 only increases the right-hand side of Eqn. (26); it becomes a geometric sum in χ with ratio α^2 ((V + 1)^2 (ℓ + 1))^{2k} c_3/n. Notice that this last term is almost identical to the one in the bound of I_1, except that now the second sum starts at χ = 1, which leads to an extra factor of O(((V + 1)^2 (ℓ + 1))^{2k}/n). This allows us to factor out another geometric series and proceed as we did for I_1. Since there are kℓ + 1 terms in the first sum, this yields the bound of Eqn. (28).
5.8.3. Bounding I_3. This set will require more delicate treatment, since circuits in C_3 visit potentially many vertices and edges, yet we need to keep the power of α at most 2kℓ.
We first show that, in this case, E'_1 is also large. We have E ≥ V, and we write V = kℓ + t. Define E'_1 as the number of edges traversed exactly once in γ, so that E'_1 = b + E_1. Since γ has length 2kℓ, we deduce that 2(E − E'_1) + E'_1 ≤ 2kℓ, which implies that E'_1 ≥ 2t. Finally, Lemma 14 yields E_1 ≥ (2t − 4(χ + k))_+.

Eqn. (26) then gives,
To simplify our notation, we will write F(k, ℓ) = c_0 n ℓ^{4k} c_1^{k^2} c_2^k α^{2(kℓ−1)} (2kℓ)((2kℓ + 1)^2 (ℓ + 1))^{2k}. Observe now that we can write I_3 as F(k, ℓ) times a double sum over t and χ, where c(k, ℓ) = c_3 α^2 ((2kℓ + 1)^2 (ℓ + 1))^{2k}. To bound the double sum, we start by removing a factor of c(k, ℓ)/n, which leaves Eqn. (29). The n in the denominator is crucial to cancel the linear term in F(k, ℓ), keeping the upper bound for I_3 small. We focus on bounding the double sum, splitting the sum over t into two parts.
Case 1: t < 2k + 2. For these values of t, we have (t − 2k − 2 − 2χ)_+ = 0, hence the double sum is bounded as in Eqn. (30), where the last equality uses again the same geometric upper bound for the sum over χ.
Case 2: t ≥ 2k + 2. We split the second sum into the terms with χ ≤ N = t/2 − k − 1 and the terms with χ > N, and analyse the two separately. The first part is bounded in two steps: we first factor out the power α^{4k+4}, and then use that c(k, ℓ)/n ≤ c_4 k/√n, which holds for large enough n, to simplify the second sum to the addition of N + 1 equal terms. To bound the result, we once more upper bound by a geometric series of ratio less than one, obtaining Eqn. (31). We are left with the terms χ > N. In this case we again get (t − 2k − 2 − 2χ)_+ = 0, and the sum over χ is of the order of (c(k, ℓ)/n)^{N+1} ≤ (c(k, ℓ)/n)^{t/2−k−1}. Substituting this into the remaining sum Σ_{t=2k+2}^{kℓ} α^{2t} (· · ·), for some universal constant C, factoring out α^{4k+4}, and changing variables in the summation, we conclude Eqn. (32). Using (29) and the results for Case 1 (30) and Case 2 (31 and 32), we conclude Eqn. (33).
5.8.4. Finishing the proof of Theorem 16. We have bounded the three pieces we need to prove the theorem.
From (27), (28), and (33), with n sufficiently large, we get
E‖B̄^{(ℓ)}‖^{2k} ≤ I_1 + I_2 + I_3 ≤ α^{2kℓ} n · C c_1^{k^2} c_5^k ℓ^{4k} (kℓ)^4 (k^2 ℓ^3)^{4k}
≤ α^{2kℓ} n · C' c_1^{(log n)^{2/3}} c_6^{(log n)^{1/3}} (log n)^{4(log n)^{1/3}} (log n)^{16/3} ((log n)^{11/3})^{4(log n)^{1/3}}
≤ α^{2kℓ} n · C'' c_1^{(log n)^{2/3}} c_6^{(log n)^{1/3}} (log n)^{20(log n)^{1/3} + 6} =: α^{2kℓ} n f(n),
where c_5, c_6, C, C', C'' are universal constants. Take any ε > 0. It can be checked that (log n)^{20(log n)^{1/3} + 6} = o(n^ε), and f(n) = o(n^ε) as well. Let g(n) = exp((log n)^{3/4}); then g(n) = o(n^ε) but g(n)^{2k} ≫ n f(n). We apply Markov's inequality, so that
P(‖B̄^{(ℓ)}‖ ≥ α^ℓ g(n)) ≤ E‖B̄^{(ℓ)}‖^{2k} / (α^ℓ g(n))^{2k} ≤ n f(n) / g(n)^{2k} → 0,
which is the statement of the theorem.

5.9. Proof of the main result, Theorem 3. We will again take ℓ = c log n, with c chosen so that the graph is ℓ-tangle-free with high probability. By Eqns. (10) and (17), it suffices to bound the norms of the matrices appearing there. Notice that (d − 1)^ℓ ≤ (d − 1)^{c log n} ≤ n^{c log d}, so take c < min{1/32, 1/log d}. Then (d − 1)^ℓ = O(n^ε) for some 0 < ε < 1, and recall that exp((log n)^{3/4}) = o(n^ε) for any ε > 0. We apply Theorems 16 and 17 to obtain the claimed bound, with an error term ε_n → 0 as n → ∞.

6. Application: Community detection
In many cases, such as online networks, we would like to be able to recover specific communities within a graph. In the typical setup, a community is a set of vertices that are more densely connected to each other than to the rest of the graph.
The model we present here is inspired by the planted partition or stochastic blockmodel (SBM, Holland, Laskey, and Leinhardt [1983]). In the SBM, each vertex belongs to a class or community, and the probability that two vertices are connected is a function of the classes of the vertices. It is a generalization of the Erdős-Rényi random graph. The classes or blocks in the SBM make it a good model for graphs with community structure, where nodes preferentially connect to other nodes depending on their communities (Newman [2010]).
There are many methods for detecting a community given a graph. For an overview of the topic, see Fortunato [2010]. Spectral clustering is a common method which can be applied to any set of data {ζ i } n i=1 . Given a symmetric and non-negative similarity function S, the similarity is computed for every pair of data points, forming a matrix A ij = S(ζ i , ζ j ) = S(ζ j , ζ i ) ≥ 0. The spectral clustering technique is to compute the leading eigenvectors of A, or matrices related to it, and use the eigenvectors to cluster the data. In our case, the matrix in question is just the Markov matrix of a graph, defined soon. We will show that we can guarantee the success of the technique if the degrees are large enough.
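The spectral clustering recipe just described can be sketched on synthetic one-dimensional data with a Gaussian similarity (this is our own illustration; the data, kernel, and names are assumptions, not the paper's construction):

```python
import numpy as np

# Two well-separated blobs of 20 points each.
rng = np.random.default_rng(0)
pts = np.concatenate([rng.normal(0.0, 0.3, 20), rng.normal(5.0, 0.3, 20)])

# Symmetric, nonnegative similarity matrix A_ij = S(z_i, z_j).
A = np.exp(-np.subtract.outer(pts, pts) ** 2)
d = A.sum(axis=1)
# Symmetrized Markov matrix D^{-1/2} A D^{-1/2}: same spectrum as D^{-1} A.
M = A / np.sqrt(np.outer(d, d))

w, V = np.linalg.eigh(M)       # eigenvalues in ascending order
v2 = V[:, -2]                  # eigenvector of the second-largest eigenvalue
labels = (v2 > 0).astype(int)  # sign split gives the two clusters
print(labels[:20], labels[20:])
```

With two nearly disconnected blocks, the top two eigenvalues are close to one, the second eigenvector is approximately constant on each block with opposite signs, and thresholding it recovers the clusters.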
Our graph model is a regular version of the SBM. We build it on a "frame," which is a small, weighted graph that defines the community structure present in the larger, random graph. Each class is represented by a vertex in the frame. The edge weights in the frame define the number of edges between classes. What makes our model differ from the SBM is that the connections between classes are described by a regular random graph rather than an Erdős-Rényi random graph. However, the graph itself is not necessarily regular.
A number of authors have studied similar models. Our model is a generalization of a random lift of the frame, which is said to cover the random graph (e.g. Marcus, Spielman, and Srivastava [2013b], Angel, Friedman, and Hoory [2015], Bordenave, Lelarge, and Massoulié [2015]). This type of random graph was also studied by Newman and Martin [2014], who called it an equitable random graph, since the community structure is equivalent to an equitable partition. This partition induces a number of symmetries across vertices in each community which are useful when studying the eigenvalues of the graph. Barrett, Francis, and Webb [2017] studied the effect of these symmetries from a group theory standpoint. The work of Barucca [2017] is closest to ours: they consider spectral properties of such graphs and their implications for spectral community clustering. In particular, they show that the spectrum of what we call the "frame" (in their words, the discrete spectrum, which is deterministic) is contained in that of the random graph. They use the resolvent method (called the cavity method in the physics community) to analyze the continuous part of the spectrum in the limit of large graph size, and argue that community detection is possible when the deterministic frame eigenvalues all lie outside the bulk. However, this analysis assumes that there are no stochastic eigenvalues outside the bulk, which will only hold with high probability if the graph is Ramanujan. Our analysis shows that, if a set of pairwise spectral gaps hold between all communities, then this will be the case.
6.1. The frame model. We define the random regular frame graph distribution G(n, H) as a distribution over simple graphs on n vertices parametrized by the "frame" H. The frame H = (V, E, p, D) is a weighted, directed graph. Here, V is the vertex set, E ⊆ {(i, j) : i, j ∈ V} is the directed edge set, the vertex weights are p, and the edge weights are D. Note that we drop the arrows on the edge set in this Section, since it will always be directed. The vertex weight vector p ∈ R^{|V|}, where Σ_{i∈V} p_i = 1, sets the relative sizes of the classes. The edge weights are a matrix of degrees D ∈ N^{|V|×|V|}. These assign the number of edges between each class in the random graph: D_{ij} is the number of edges from each vertex in class i to vertices in class j. The degrees must satisfy the balance condition
(37) n_i D_{ij} = n_j D_{ji}
for all i, j ∈ V where (i, j) or (j, i) are in E. This requires that, for every edge e ∈ E, its reverse orientation also exists in H. We also require that n_i = n p_i ∈ N for every i ∈ V, so that the number of vertices in each class is an integer. Given the frame H, a random regular frame graph G ∼ G(n, H) is a simple graph on n vertices with n_i vertices in class i. It is chosen uniformly among graphs with the constraint that each vertex in class i makes D_{ij} connections to vertices in class j. In other words, if i = j, we sample that block of the adjacency matrix as the adjacency matrix of a D_{ii}-regular random graph on n_i vertices. The off-diagonal blocks i ≠ j are sampled as bipartite, biregular random graphs G(n_i, n_j, D_{ij}, D_{ji}).
Sampling from G(n, H) can be performed similarly to the configuration model, where each node is assigned as many half edges as its degree, and these are wired together with a random matching (Newman [2010]). The detailed balance condition, Eqn. (37), ensures that this matching is possible. Practically, we often have to generate many candidate matchings before the resulting graph is simple, but the probability of a simple graph is bounded away from zero for fixed D.
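This sampling procedure can be sketched for the simplest frame, a single bipartite biregular block (our own code; the function name and the rejection cap are arbitrary choices):

```python
import random

def sample_bipartite_biregular(n, m, d1, d2, seed=0, max_tries=1000):
    """Configuration-model sample of a simple bipartite graph with n left
    vertices of degree d1 and m right vertices of degree d2 (n*d1 == m*d2),
    rejecting matchings that produce multi-edges."""
    assert n * d1 == m * d2
    rng = random.Random(seed)
    left_stubs = [v for v in range(n) for _ in range(d1)]
    for _ in range(max_tries):
        right_stubs = [v for v in range(m) for _ in range(d2)]
        rng.shuffle(right_stubs)           # a uniform random matching
        edges = list(zip(left_stubs, right_stubs))
        if len(set(edges)) == len(edges):  # no repeated edge => simple
            return edges
    raise RuntimeError("no simple graph found")

edges = sample_bipartite_biregular(6, 4, 2, 3)
deg_left = [sum(1 for u, _ in edges if u == v) for v in range(6)]
deg_right = [sum(1 for _, w in edges if w == v) for v in range(4)]
print(deg_left, deg_right)   # degrees 2 on the left, 3 on the right
```

Bipartite wiring rules out self loops, so only multi-edges trigger a rejection; for fixed degrees the acceptance probability stays bounded away from zero, as noted above.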
An example of a random regular frame graph is the bipartite, biregular random graph: the family G(n, m, d_1, d_2) is a random regular frame graph G(n + m, H), where the frame H is the two-vertex frame shown in Figure 6A. A frame on three vertices instead generates a random tripartite graph with regular degrees between vertices in different independent sets, shown in Figure 6B.
6.2. Markov and related matrices of frame graphs. Now, we define a number of matrices associated with the frame and the sample of the random regular frame graph.
Let G be a simple graph. Define D_G = diag(d_G), the diagonal matrix of degrees in G. The Markov matrix P = P(G) is defined as P = D_G^{−1} A, where A is the adjacency matrix. The Markov matrix is the row-normalized adjacency matrix, and it contains the transition probabilities of a random walker on the graph G. Let M = I − L = D_G^{−1/2} A D_G^{−1/2}, a matrix simply related to the normalized Laplacian L. We call this the symmetrized Markov matrix. Then P and M have the same eigenvalues, but M is symmetric. Suppose G ∼ G(n, H), where the frame H = (V, E, p, D). Another matrix that will be useful is what we call the Markov matrix of the frame R, where R_{ij} = D_{ij} / Σ_j D_{ij}. Thus, R is a row-normalized D, in the same way that the Markov matrix P is the row-normalized adjacency matrix A. Furthermore, R is invariant under any uniform scaling of the degrees. Because of the equitable partition property of random regular frame graphs, eigenvectors of the frame matrices D = D(H) or R = R(H) lift to eigenvectors of A = A(G) or P = P(G), respectively. Suppose Dx = λx; then it is a straightforward exercise to check that Ay = λy for the piecewise constant vector y with y_v = x_i for every vertex v in class i. Using the same procedure, we can lift any eigenpair of R to an eigenpair of P with the same eigenvalue.
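The lifting of frame eigenpairs is easy to verify numerically. Here is a sketch (ours) for the bipartite frame D = [[0, 3], [2, 0]] with class sizes n_1 = 4 and n_2 = 6: D has eigenvalue √6 with eigenvector (√3, √2), and the piecewise constant lift should be an eigenvector of the sampled adjacency matrix.

```python
import numpy as np

# Configuration-model wiring of a (3,2)-biregular bipartite (multi)graph.
rng = np.random.default_rng(1)
n1, n2, d1, d2 = 4, 6, 3, 2
left_stubs = np.repeat(np.arange(n1), d1)
right_stubs = rng.permutation(np.repeat(np.arange(n2), d2))

A = np.zeros((n1 + n2, n1 + n2))
for u, w in zip(left_stubs, right_stubs):
    A[u, n1 + w] += 1        # multi-edges kept; the lift still holds,
    A[n1 + w, u] += 1        # since block row sums are preserved

# Frame eigenpair: D x = lam x with lam = sqrt(6), x = (sqrt(3), sqrt(2)).
lam = np.sqrt(6.0)
y = np.concatenate([np.full(n1, np.sqrt(3.0)), np.full(n2, np.sqrt(2.0))])
print(np.allclose(A @ y, lam * y))   # the lifted vector is an eigenvector
```

The check works even on a multigraph because every left vertex still has exactly D_{12} = 3 edge-endpoints into the right class, and vice versa.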
6.2.1. Bounds on the eigenvalues of frame graphs in terms of blocks. The following result, Proposition 18, is due to Wan and Meilă [2015]. The spectrum of the Markov matrix σ(P) enjoys a simple connection to σ(A) when A is the adjacency matrix of a graph drawn from G(n, m, d_1, d_2). In this case, M = A/√(d_1 d_2), so the eigenvalues of P are just the scaled eigenvalues of A. This and the spectral gap for bipartite, biregular random graphs, Theorem 4, lead to the following remark:
Remark. For a random regular frame graph, M^{(kl)} corresponds to the symmetrized Markov matrix of a bipartite biregular graph G(n_k, n_l, D_{kl}, D_{lk}), whose second singular value is thus bounded by Theorem 4.
Suppose we are given a frame that fits the conditions of Proposition 18; namely, D cannot have any zero eigenvalues. Then we can uniformly grow the degrees, which leaves R invariant but allows us to reach an arbitrarily small C. This ensures that the leading K eigenvalues of P are equal to the eigenvalues of R. Note that this actually means that the entire random regular frame graph satisfies a weak Ramanujan property. We now show that this guarantees spectral clustering.
6.3. Spectral clustering. Spectral clustering is a popular method of community detection. Because some eigenvectors of P, the Markov matrix of a random regular frame graph, are piecewise constant on classes, we can use them to recover the communities so long as those eigenvectors can be identified. Suppose there are K total classes in our random regular frame graph. Then, given the eigenvectors x^1, x^2, . . . , x^K, which are piecewise constant across classes, we can cluster vertices by class. For each vertex v ∈ V(G), associate the vector y^v ∈ R^K with y^v_j = x^j_v. Then if y^v = y^u for u, v ∈ V(G), vertices u and v belong to the same class.¹ It is simple to recover these piecewise constant vectors x^1, x^2, . . . , x^K when they are the leading eigenvectors. These facts lead to the following theorem:
Theorem 19 (Spectral clustering guarantee in frame graphs).
Let G be a random regular frame graph G(n, H) and P its Markov matrix. Let R be the Markov matrix of the frame H = (V, E, p, D), with |V(H)| = K classes, eigenvalues λ_1 ≥ ... ≥ λ_K, and |λ_K| > 0. Then we can scale the degrees by some κ ∈ N, D → κD, so that the vertex classes are recoverable by spectral clustering of the leading K eigenvectors of P.
Remark. The conditions of Theorem 19 are very general, but the resulting guarantees are weaker than those obtained by more sophisticated methods tailored to a specific frame model. We illustrate this with the following example.
6.3.1. Example: The regular stochastic block model. Brito, Dumitriu, Ganguly, Hoffman, and Tran [2016] and Barucca [2017] studied a regular stochastic block model, which can be seen as a special case of our frame model. Let the frame H be the complete directed graph on two vertices, including self-loops, where

D = ( d_1  d_2 ; d_2  d_1 )  and  p = (1/2, 1/2).

Define the regular stochastic block model as G(2n, H). This is a graph with two classes of equal size, representing two communities of vertices, with within-class degree d_1 and between-class degree d_2. We assume d_1 > d_2, since communities are more strongly connected within. Brito, Dumitriu, Ganguly, Hoffman, and Tran [2016] proved the following theorem:

Theorem 20. If (d_1 − d_2)^2 > 4(d_1 + d_2 − 1), then there is an efficient algorithm for strong recovery, i.e., recovery of the exact communities with high probability as n → ∞.
Theorem 20 gives a sharp bound on the degrees for recovery, which we can compare to our spectral clustering results. The eigenvalues of D are d_1 + d_2 and d_1 − d_2, and the Markov matrix of the frame R has eigenvalues 1 and (d_1 − d_2)/(d_1 + d_2). The diagonal blocks L^{(11)} and L^{(22)} each correspond to the Laplacian matrix of a d_1-regular random graph on n vertices, whereas the off-diagonal block M^{(12)} corresponds to the Laplacian of a d_2-regular bipartite graph on 2n vertices. Using our results and the previously known results for regular random graphs (Friedman [2003, 2004], Bordenave, Lelarge, and Massoulié [2015]), we can pick some C > 2√(d_2 − 1)/d_2, since d_1 > d_2 and we will eventually take the degrees to be large. Using Proposition 18, we find that the spurious eigenvalues of P come after the leading 2 eigenvalues if

(d_1 − d_2)/(d_1 + d_2) > 2√(d_2 − 1)/d_2

to leading order in the degrees. Rearranging, we obtain the condition

(d_1 − d_2)^2 > (4/d_2)(d_1 + d_2)^2.

[1] In the SBM case, the eigenvectors are not piecewise constant, but they are aligned with the eigenvectors of R and thus highly correlated across vertices in the same class. A more flexible clustering method such as K-means must be applied to the vectors y^v in that case.
Assuming d_2/d_1 = β < 1 fixed, and taking the limit d_1, d_2 → ∞, we find that the result of Brito, Dumitriu, Ganguly, Hoffman, and Tran [2016] becomes

(d_1 − d_2)^2 > 4(d_1 + d_2),

whereas our result becomes

(d_1 − d_2)^2 > 4(d_1 + d_2)(1 + β)/β,

illustrating that our spectral threshold is a factor of (1 + β)/β weaker.
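Spectral clustering of a regular SBM can be simulated directly. The sketch below is my own construction, not the authors' sampler: within-class graphs are unions of random Hamiltonian cycles (exactly d_1-regular, though coinciding edges can make it a multigraph), and between-class edges come from d_2 random perfect matchings. The degrees are chosen to satisfy (d_1 − d_2)^2 > 4(d_1 + d_2 − 1), so the ±1 community indicator, an exact eigenvector with eigenvalue d_1 − d_2, separates from the bulk.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 150                      # vertices per community
d1, d2 = 12, 2               # within- and between-community degrees

def hamiltonian_cycle(rng, n):
    """Adjacency matrix of a uniformly random Hamiltonian cycle on n vertices."""
    order = rng.permutation(n)
    A = np.zeros((n, n))
    for i in range(n):
        u, v = order[i], order[(i + 1) % n]
        A[u, v] += 1
        A[v, u] += 1
    return A

# Within-community: union of d1/2 random cycles -> exactly d1-regular.
A11 = sum(hamiltonian_cycle(rng, n) for _ in range(d1 // 2))
A22 = sum(hamiltonian_cycle(rng, n) for _ in range(d1 // 2))
# Between-community: d2 random perfect matchings (permutations).
B = np.zeros((n, n))
for _ in range(d2):
    B[np.arange(n), rng.permutation(n)] += 1
A = np.block([[A11, B], [B.T, A22]])
assert np.allclose(A.sum(axis=1), d1 + d2)    # exactly (d1 + d2)-regular

# The +/-1 community indicator is an exact eigenvector, eigenvalue d1 - d2.
v = np.concatenate([np.ones(n), -np.ones(n)])
assert np.allclose(A @ v, (d1 - d2) * v)

# Cluster by the sign of the eigenvector of the second-largest eigenvalue.
w, V = np.linalg.eigh(A)
labels = np.sign(V[:, -2])
acc = max(np.mean(labels == np.sign(v)), np.mean(labels == -np.sign(v)))
print(acc)                                    # close to 1.0 w.h.p.
```

Since eigh sorts eigenvalues in ascending order, column −2 is the second-largest eigenvalue; whenever the bulk stays below d_1 − d_2, that eigenvector is the community indicator and sign clustering recovers the classes exactly.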

7. Application: Low density parity check or expander codes
Another useful application of random graphs is as expanders, loosely defined as graphs where the neighborhood of any small set of vertices is large. Expander codes, also called low density parity check (LDPC) codes, were first introduced by Gallager in his PhD thesis (Gallager [1962]). These are a family of linear error-correcting codes whose parity-check matrix is encoded in an expander graph. A linear code is a set C ⊂ Σ^L, where a length-L codeword x ∈ C if and only if Hx = 0. The alphabet Σ is typically a finite field, and H ∈ Σ^{P×L} is the parity-check matrix. In the simplest case, Σ = F_2 and each row of H can be interpreted as a parity constraint on codewords. The performance of such codes depends on how good an expander the underlying graph is, which in turn can be shown to depend on the separation of its eigenvalues. For a good introduction and overview of the subject, see Richardson and Urbanke [2008].
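The membership condition Hx = 0 over F_2 can be illustrated with a toy parity-check matrix (my example, not from the paper):

```python
import numpy as np

# 2 x 3 parity-check matrix over F_2: rows demand x1+x2 = 0 and x2+x3 = 0.
H = np.array([[1, 1, 0],
              [0, 1, 1]])

def in_code(H, x):
    """Membership test: x is a codeword iff H x = 0 over F_2."""
    return not np.any(H @ x % 2)

print(in_code(H, np.array([1, 1, 1])))  # True: both parity checks are satisfied
print(in_code(H, np.array([1, 0, 1])))  # False: the first check fails
```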
Following Tanner [1981], we construct a code C from a (d_1, d_2)-regular bipartite graph G on n + m vertices and two smaller linear codes C_1 and C_2 of length d_1 and d_2, respectively. We write C_1 = [d_1, k_1, δ_1] and C_2 = [d_2, k_2, δ_2] with the usual convention of length, dimension, and minimum distance. We assume the codes are binary, over the finite field F_2. A codeword is x ∈ C ⊂ F_2^{|E|}, where |E| = n d_1 = m d_2; that is, we associate a bit to each edge of the bipartite graph G. Let (e_i(v))_{i=1}^{d_v} denote the edges incident to a vertex v in some arbitrary, fixed order. Then x ∈ C if and only if (x_{e_1(u)}, x_{e_2(u)}, ..., x_{e_{d_1}(u)})^T ∈ C_1 for all u ∈ V_1 and (x_{e_1(v)}, x_{e_2(v)}, ..., x_{e_{d_2}(v)})^T ∈ C_2 for all v ∈ V_2. The final code C is also linear. With this construction, the code C has rate at least k_1/d_1 + k_2/d_2 − 1 (Tanner [1981]).
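A minimal instance of Tanner's construction (my toy example, not from the paper): take G = K_{3,3}, which is (3, 3)-biregular, and let both component codes be the [3, 2, 2] single parity check code. With one bit per edge, each vertex requires its incident edge bits to sum to 0 mod 2, so the global parity-check matrix is the vertex-edge incidence matrix of G.

```python
import numpy as np
from itertools import product

n1 = n2 = 3
edges = list(product(range(n1), range(n2)))    # the 9 edges of K_{3,3}
H = np.zeros((n1 + n2, len(edges)), dtype=int)
for e, (u, v) in enumerate(edges):
    H[u, e] = 1             # parity constraint at left vertex u
    H[n1 + v, e] = 1        # parity constraint at right vertex v

def gf2_rank(M):
    """Rank over F_2 by Gaussian elimination."""
    M = M.copy() % 2
    r = 0
    for c in range(M.shape[1]):
        pivots = np.nonzero(M[r:, c])[0]
        if pivots.size == 0:
            continue
        M[[r, r + pivots[0]]] = M[[r + pivots[0], r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        r += 1
        if r == M.shape[0]:
            break
    return r

k = len(edges) - gf2_rank(H)   # dimension of the Tanner code
print(k / len(edges))          # rate 4/9, above Tanner's bound 2/3 + 2/3 - 1 = 1/3
```

The rank is 5 rather than 6 because the rows of an incidence matrix of a connected graph sum to zero over F_2, which is why the true rate exceeds the bound.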
Furthermore, Janwa and Lal [2003] proved the following bound on the minimum distance of the resulting code:

Theorem 21. Suppose δ_1 ≥ δ_2 > η/2, where η is the second largest eigenvalue of the adjacency matrix of G. Then the minimum distance of C is bounded below in terms of δ_1, δ_2, and η.

Corollary 22. Suppose the code C is constructed from a biregular, bipartite random graph G ∼ G(n, m, d_1, d_2) and the conditions of Theorem 21 hold. Then the minimum distance of C satisfies the bound of Theorem 21 with η replaced by the explicit bound of Theorem 4, with high probability.

These Tanner codes have maximal distance for the smallest η, and we use our main result, Theorem 4, to obtain the explicit bound in Corollary 22. By growing the graph, the above shows a way to construct arbitrarily large codes whose minimum distance remains proportional to the code size n d_1; that is, the relative distance δ/(n d_1) is bounded away from zero as n → ∞. However, the above bound is only useful if it yields a positive result, which depends on the codes C_1 and C_2 as well as the degrees.
Remark. In general, the performance guarantees on LDPC codes that are obtainable from graph eigenvalues are weaker than those that come from other methods. Although our method does guarantee high distance for some high-degree codes, analysis of specific decoding algorithms or a probabilistic expander analysis yields better bounds that work for lower degrees (Richardson and Urbanke [2008]).

7.1. Example: An unbalanced code based on a (14, 9)-regular bipartite graph. We illustrate the applicability of our distance bound with an example. Let C_1 = [14, 8, 7] and C_2 = [9, 4, 6]. These can be achieved by using a Reed–Solomon code on the common field F_q for any q > 14 (Richardson and Urbanke [2008]). We take q = 2^4 = 16 for inputs that are actually binary, which means each edge in the graph carries 4 bits of information. Employing Corollary 22, the Tanner code C has relative minimum distance δ/(n d_1) ≥ 0.0014 and rate at least 0.016. Taking n = 216 and m = 336 gives the code a minimum distance of at least 4.
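The rate quoted in the example follows from Tanner's bound k_1/d_1 + k_2/d_2 − 1; a quick arithmetic check:

```python
# Parameters of the component codes in the example above (Section 7.1).
d1, k1, delta1 = 14, 8, 7
d2, k2, delta2 = 9, 4, 6

rate_bound = k1 / d1 + k2 / d2 - 1
print(round(rate_bound, 3))      # 0.016, matching the rate quoted above

# The sizes must satisfy n*d1 == m*d2, since both sides share the edge set.
n, m = 216, 336
assert n * d1 == m * d2 == 3024  # |E| = 3024 edges, i.e. 3024 code symbols
```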

8. Application: Matrix completion
Assume we have some matrix Y ∈ R^{n×m} which has low "complexity"; perhaps it is low-rank or simple by some other measure. If we observe Y_ij for a limited set of entries (i, j) ∈ E ⊂ [n] × [m], then matrix completion is any method which constructs a matrix Ŷ so that ‖Ŷ − Y‖ is small, or even zero. Matrix completion has attracted significant attention in recent years as a tractable algorithm for making recommendations to users of online systems based on the tastes of other users (a.k.a. the Netflix problem). We can think of it as the matrix version of compressed sensing (Candès and Tao [2010], Candès and Plan [2010]).
Recently, a number of authors have studied the performance of matrix completion algorithms where the index set E is the edge set of a regular random graph (Heiman, Schechtman, and Shraibman [2014], Bhojanapalli and Jain [2014], Gamarnik, Li, and Zhang [2017]). Heiman, Schechtman, and Shraibman [2014] describe a deterministic method of matrix completion, where they can give performance guarantees for a fixed observation set E over many input matrices Y. The error of their reconstruction depends on the spectral gap of the graph. We expand upon the result of Heiman, Schechtman, and Shraibman [2014], extending it to rectangular matrices and improving their bounds in the process.

8.1. Matrix norms as measures of complexity and their relationships. We will employ a number of different matrix and vector norms in this Section. These are all related by the properties of the underlying Banach spaces. The complexity of Y is measured using a factorization norm (also called the max-norm):

γ_2(Y) = min_{Y = UV*} ‖U‖_{ℓ_2 → ℓ_∞} ‖V‖_{ℓ_2 → ℓ_∞}.

The minimum is taken over all possible factorizations Y = UV*, and the norm ‖X‖_{ℓ_2 → ℓ_∞} = max_i (Σ_j X_ij^2)^{1/2} returns the largest ℓ_2 norm of a row. So, equivalently,

γ_2(Y) = min_{Y = UV*} max_{i,j} ‖u_i‖_2 ‖v_j‖_2,

where u_i and v_j are the rows of U and V. See Linial, Mendelson, Schechtman, and Shraibman [2007] for a number of results about the norm γ_2. In particular, note that Property (38) says that γ_2 is sub-multiplicative under the Hadamard product [Lee, Shraibman, and Špalek, 2008; Heiman, Schechtman, and Shraibman, 2014] and will be used in our proof. Properties (39) and (40) relate γ_2 to two common complexity measures of matrices, the trace norm (sum of singular values, i.e. the ℓ_2^m → ℓ_2^n nuclear norm) and the rank. Note also the well-known fact that

‖Y‖_tr = min_{Y = UV*} ‖U‖_F ‖V‖_F,

where ‖X‖_F = (Σ_ij X_ij^2)^{1/2} is the Frobenius norm. We see that the trace norm constrains the factors U and V to be small on average via ‖·‖_F, whereas the norm γ_2 is similar but constrains the factors uniformly via ‖·‖_{ℓ_2 → ℓ_∞}.
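Any explicit factorization certifies an upper bound on γ_2. A short illustration (my sketch, not from the paper): the balanced SVD factorization U = P S^{1/2}, V = Q S^{1/2} of Y = P S Q^T attains the trace-norm minimum ‖U‖_F ‖V‖_F, while the product of its largest row norms upper-bounds γ_2(Y).

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 8))  # rank-2, 6 x 8

P, s, Qt = np.linalg.svd(Y, full_matrices=False)
U = P * np.sqrt(s)          # scale columns of P by sqrt of singular values
V = Qt.T * np.sqrt(s)
assert np.allclose(U @ V.T, Y)                 # a valid factorization of Y

row_norm = lambda M: np.sqrt((M ** 2).sum(axis=1)).max()  # ||.||_{l2 -> linf}
gamma2_ub = row_norm(U) * row_norm(V)          # upper bound on gamma_2(Y)

trace_norm = s.sum()
# This balanced factorization attains the trace-norm minimum ||U||_F ||V||_F:
assert np.isclose(np.linalg.norm(U) * np.linalg.norm(V), trace_norm)
# gamma_2 dominates the normalized trace norm (Property (39) in the text):
print(gamma2_ub >= trace_norm / np.sqrt(Y.shape[0] * Y.shape[1]))  # True
```

Computing γ_2 exactly requires the semidefinite program mentioned below; the factorization above only gives a certificate from one feasible point.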
However, we should note that computing γ_2(Y) is more costly than computing the trace norm, which requires only a singular value decomposition; γ_2(Y) is still computable in polynomial time via convex programming [Heiman, Schechtman, and Shraibman, 2014].
8.2. Matrix completion generalization bounds. The method of matrix completion that we study is to return the matrix Ŷ which is the solution to:

(41) minimize_X γ_2(X) subject to X_ij = Y_ij for all (i, j) ∈ E.

The γ_2 norm was first proposed by Srebro and Shraibman [2005] and Srebro, Rennie, and Jaakkola [2005] as a robust complexity measure, and it was shown to be an effective and practical regularization on real datasets [Lee, Recht, Srebro, Tropp, and Salakhutdinov, 2010; Recht and Ré, 2013]. Heiman, Schechtman, and Shraibman [2014] analyze the performance of the convex program (41) for a square matrix Y using an expander argument, assuming that E is the edge set of a d-regular graph with second eigenvalue η. They obtain the following theorem:

Theorem 23 (Heiman, Schechtman, and Shraibman [2014]). Let E be the set of edges of a d-regular graph with second eigenvalue bound η. For every Y ∈ R^{n×n}, if Ŷ is the output of the optimization problem (41), then

(1/n^2) ‖Ŷ − Y‖_F^2 ≤ c γ_2(Y)^2 η/d,

where c = 8 K_G ≤ 14.3 is a universal constant and ‖·‖_F is the Frobenius norm.
Considering sampling according to the biadjacency matrix of a bipartite graph, we find a similar result which also applies to rectangular matrices. If n = m and d_1 = d_2 = d, our bound is equivalent to that of Theorem 23, but with the constant improved by a factor of two due to stronger mixing in bipartite graphs. Intuitively, using a biadjacency matrix is a "more random" way of sampling than using an adjacency matrix, since it is not symmetric.
Theorem 24. Let E be the set of edges of a (d_1, d_2)-regular bipartite graph with second eigenvalue bound η. For every Y ∈ R^{n×m}, if Ŷ is the output of the optimization problem (41), then

(1/(nm)) ‖Ŷ − Y‖_F^2 ≤ c γ_2(Y)^2 η/√(d_1 d_2),

where c = 4 K_G ≤ 7.13.
8.3. Noisy matrix completion bounds. Furthermore, our analysis easily extends to the case where the matrix we observe is corrupted with noise. As mentioned above, similar results will hold for the trace norm. In the noisy case, we solve the problem

(42) minimize_X γ_2(X) subject to (1/|E|) Σ_{(i,j)∈E} (X_ij − Z_ij)^2 ≤ δ^2

and obtain the following theorem:

Theorem 25. Suppose we observe Z_ij = Y_ij + ε_ij with bounded error

(1/|E|) Σ_{(i,j)∈E} ε_ij^2 ≤ δ^2.

Then solving the optimization problem (42) will yield a bound of

(1/(nm)) ‖Ŷ − Y‖_F^2 ≤ c γ_2(Y)^2 η/√(d_1 d_2) + 4δ^2,

where c = 4 K_G ≤ 7.13.
Proof. Denote by Ŷ the solution to (42). It will be useful to introduce the sampling operator P_E : R^{n×m} → R^{n×m}, where (P_E(X))_ij = X_ij if (i, j) ∈ E and 0 otherwise. Again let R = (Ŷ − Y) ∘ (Ŷ − Y) be the matrix of squared errors; then, as in the proof of Theorem 24,

(1/(nm)) Σ_ij R_ij ≤ (1/|E|) Σ_{(i,j)∈E} R_ij + K_G γ_2(R) η/√(d_1 d_2).

Moreover, since Y is a feasible solution to (42), we have γ_2(Ŷ) ≤ γ_2(Y).
Applying (38) and the triangle inequality,

γ_2(R) ≤ γ_2(Ŷ − Y)^2 ≤ (γ_2(Ŷ) + γ_2(Y))^2 ≤ 4 γ_2(Y)^2.

Using the triangle inequality again gives

‖P_E(Ŷ − Y)‖_F/√|E| ≤ ‖P_E(Ŷ − Z)‖_F/√|E| + ‖P_E(Z − Y)‖_F/√|E| ≤ 2δ,

taking into account the bound on the observation errors. Combining the three displays proves the theorem.

8.4. Application of the spectral gap. Theorem 24 provides a bound on the mean squared error of the approximation Ŷ. Directly applying Theorem 4, we obtain the following bound on the generalization error of the algorithm using a random biregular, bipartite graph:

Corollary 26. Let E be sampled from a G(n, m, d_1, d_2) random graph. For every Y ∈ R^{n×m}, if Ŷ is the output of the optimization problem (41), then asymptotically almost surely

(1/(nm)) ‖Ŷ − Y‖_F^2 ≤ c γ_2(Y)^2 (√(d_1 − 1) + √(d_2 − 1) + ε)/√(d_1 d_2),

where c = 4 K_G ≤ 7.13 is a universal constant.
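To get a feel for the guarantee, here is a small numerical illustration (my own arithmetic, assuming the bound takes the form c γ_2(Y)^2 (√(d_1 − 1) + √(d_2 − 1))/√(d_1 d_2) with c = 4 K_G ≤ 7.13 and ignoring the vanishing ε term): for d_1 = d_2 = d, the guaranteed mean squared error decays like 2c/√d as the sampling degree grows.

```python
import numpy as np

c = 7.13   # the bound 4 * K_G <= 7.13 on the universal constant

def mse_bound(gamma2, d1, d2):
    """Right-hand side of the Corollary 26 error bound (epsilon omitted)."""
    return c * gamma2**2 * (np.sqrt(d1 - 1) + np.sqrt(d2 - 1)) / np.sqrt(d1 * d2)

# Normalized MSE guarantee for a matrix with gamma_2(Y) = 1, as d grows:
for d in (25, 100, 400):
    print(d, round(mse_bound(1.0, d, d), 3))   # roughly halves as d quadruples
```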