Maker–Breaker percolation games I: crossing grids

Abstract Motivated by problems in percolation theory, we study the following two-player positional game. Let Λm×n be a rectangular grid-graph with m vertices in each row and n vertices in each column. Two players, Maker and Breaker, play in alternating turns. On each of her turns, Maker claims p (as yet unclaimed) edges of the board Λm×n, while on each of his turns Breaker claims q (as yet unclaimed) edges of the board and destroys them. Maker wins the game if she manages to claim all the edges of a crossing path joining the left-hand side of the board to its right-hand side, otherwise Breaker wins. We call this game the (p, q)-crossing game on Λm×n. Given m, n ∈ ℕ, for which pairs (p, q) does Maker have a winning strategy for the (p, q)-crossing game on Λm×n? The (1, 1)-case corresponds exactly to the popular game of Bridg-it, which is well understood due to it being a special case of the older Shannon switching game. In this paper we study the general (p, q)-case. Our main result is to establish the following transition. If p ≥ 2q, then Maker wins the game on arbitrarily long versions of the narrowest board possible, that is, Maker has a winning strategy for the (2q, q)-crossing game on Λm×(q+1) for any m ∈ ℕ. If p ≤ 2q − 1, then for every width n of the board, Breaker has a winning strategy for the (p, q)-crossing game on Λm×n for all sufficiently large board-lengths m. Our winning strategies in both cases adapt more generally to other grids and crossing games. In addition we pose many new questions and problems.


Results and organization of the paper
Biased Maker-Breaker games are a central area of research on positional games, in particular due to their intriguing and deep connections to resilience phenomena in discrete random structures. Much of the research on Maker-Breaker games has focused on the case where the 'board' is a complete hypergraph, or an arithmetically defined hypergraph corresponding to all the solutions to a system of equations in some finite integer interval. Typically the 'winning sets' that Maker seeks to claim in these games all have the same size.
In this paper we focus on boards and winning sets with rather different properties: we consider rectangular grid-graphs, and our winning sets consist of crossing paths, whose sizes can vary wildly.
Explicitly, we define the (p, q)-crossing game as follows. Let Λm×n be the rectangular grid-graph with m vertices in each row and n vertices in each column; our convention is to call m the length and n the width of the board. Two players, Maker and Breaker, play in alternating turns, with Maker playing first. On each of her turns, Maker claims p (as yet unclaimed) edges of the board Λm×n, while on each of his turns Breaker claims q (as yet unclaimed) edges of the board and destroys them. The game ends either when Maker manages to claim all the edges of a crossing path joining the left-hand side of the board to its right-hand side, in which case we declare her the winner, or when the board reaches a state where it is no longer possible for Maker to ever claim such a left-right crossing path, in which case we declare Breaker the winner. A natural question to ask is: given positive integers m, n, p, q, which player has a winning strategy for the (p, q)-crossing game on Λm×n?
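The winning condition is plain graph connectivity, so it can be checked mechanically. The following sketch (our own helper, not from the paper) tests whether a set of claimed edges of Λm×n contains a left-right crossing path; Maker has won exactly when her claimed edges pass this test, and Breaker has won once the union of Maker's edges and the still-unclaimed edges fails it.

```python
def has_left_right_crossing(m, n, edges):
    """Does `edges` (pairs of (x, y) vertices of the m-by-n grid) contain
    a path from some vertex (1, y) to some vertex (m, y')?"""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    # breadth-/depth-first search from every left-hand-side vertex at once
    starts = [(1, y) for y in range(1, n + 1)]
    seen, stack = set(starts), list(starts)
    while stack:
        u = stack.pop()
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return any((m, y) in seen for y in range(1, n + 1))
```

For example, on Λ3×2 the two edges ((1,1),(2,1)) and ((2,1),(3,1)) form a crossing path, while either edge alone does not.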
Our main results are the following two theorems, which together show that the game undergoes a sharp transition at p = 2q.

Theorem 1.1. Let q, m ∈ ℕ. Then Maker has a winning strategy for the (2q, q)-crossing game on Λm×(q+1).

Theorem 1.2. Let p, q, n ∈ ℕ with p ≤ 2q − 1. Then there exists m₀ = m₀(n, q) such that Breaker has a winning strategy for the (p, q)-crossing game on Λm×n for all m ≥ m₀.

In other words, if Maker has at least twice the power of Breaker and the board is wide enough that Breaker cannot win in a single turn, then Maker wins the game no matter how long the board is. On the other hand, if Maker has strictly less than twice Breaker's power, then Breaker has a winning strategy on all boards that are sufficiently long (with respect to the board width n and Breaker's power q). The proofs of Theorems 1.1 and 1.2 can be found in Sections 4 and 3 respectively. As we remark in Section 5, our strategies for these two games adapt to a number of other games and grids; see in particular Theorem 5.3 for a generalization of Theorem 1.2.
The rest of this paper is organized as follows. In Section 1.2 we give some background and motivation for our problem. In Section 2 we go over some basic definitions and prove some elementary results on crossing games. For completeness, we also record a winning strategy for the (1, 1)-crossing game on S(n+1)×n which allows Maker to play any edge on her first move (this might be folklore: that Maker has a winning strategy is well known, but we could not find a reference to the fact that any first move will do). We end this paper in Section 6 with a number of questions and open problems, including a discussion of connections to the study of fugacity in statistical physics and some enumeration problems in analytic combinatorics.

Background and motivation
Maker-Breaker games are a class of positional games which have attracted considerable attention from researchers in combinatorics and discrete probability. The set-up is simple. We have a finite board (a set) X, and a collection W of subsets of X called winning sets. Two players, Maker and Breaker, take turns to claim as yet unclaimed elements of X. Maker (typically) plays first, and claims a elements in each of her turns, while Breaker claims b elements on each of his. Maker's aim is to claim all the elements of a winning set W ∈ W , while Breaker's aim is to thwart her, i.e. to claim at least one element from each winning set. Since the board is finite, no draws are allowed, and the main question is to determine who has a winning strategy.
Maker-Breaker games on graphs have been extensively studied since an influential paper of Chvátal and Erdős [10] in the late 1970s. Important examples of such games include the connectivity game, the k-clique game and the Hamiltonicity game, where the board X consists of the edges of a complete graph on n vertices and the winning sets are spanning trees, k-cliques and Hamiltonian cycles respectively.
In their paper Chvátal and Erdős proved that, for a variety of such games, if n is sufficiently large, then Maker has a winning strategy in the case where a = b = 1. In each case they then asked how large a bias b = b(n) is required for the (1, b) versions of these games to turn into a Breaker's win, and provided a surprising and influential random graph heuristic for determining the value of these threshold biases. Namely, according to this heuristic the threshold bias at which Breaker gains a winning strategy should lie close to the threshold b for which a set of n(n − 1)/(2(b + 1)) edges chosen uniformly at random fails, with high probability, to contain any winning set.
This random graph heuristic has been widely investigated by a large number of researchers, in particular by Beck [1,2,3,4] and Bednarska and Łuczak [6,7]. Its correctness has been rigorously established for some games, such as the connectivity [13], k-clique [5] and Hamiltonicity [17] games, but it has also been shown to fail for other games such as general H-games [6] (where the winning sets are copies of some fixed, finite graph H containing at least three non-isolated vertices).
In a different direction, Stojaković and Szabó [21] considered playing these Maker-Breaker games on random boards, by having X consist of the edges of an Erdős-Rényi random graph G_n,p. As having fewer edges cannot help Maker, the natural question in this setting is: what is the threshold p* such that if p ≫ p*, then with probability 1 − o(1) Maker has a winning strategy for the (1, 1) version of the game on G_n,p, while if p ≪ p*, then with probability 1 − o(1) Breaker has a winning strategy? Stojaković and Szabó showed that for some games, such as the connectivity game, the reciprocal of the threshold bias and p* are of the same order, but that for others, such as the triangle game, no such relationship holds.
The intriguing connections between Maker-Breaker games and deep phenomena in discrete probability (in addition to their obvious combinatorial appeal) have led to an abiding interest in Maker-Breaker games. In addition to the graph-theoretic setting mentioned above, Maker-Breaker games have also been studied in arithmetic settings, where the board X corresponds to some integer interval, and the winning sets are r-tuples of integers that are solutions to systems of linear equations in r variables. We refer the reader to the 2008 monograph of Beck [5] for a summary and exposition of some of the many results in the area known up to that point, and to the preprint of Kusch, Rué, Spiegel and Szabó [18] for some recent progress on hypergraph and arithmetic Maker-Breaker games, in particular establishing the tightness of the Bednarska-Łuczak random Maker strategies for a very general class of games.
In this paper we investigate (p, q)-crossing games on rectangular grid-graphs. These differ from previous Maker-Breaker games on graphs in a number of ways: grid-graphs are far sparser than previously considered boards, and our 'winning sets', consisting of crossing paths, vary wildly in size, whereas in the previously studied examples they tended to all have the same size. Finally, we let both the aspect ratio (m : n) of our rectangular grids and the powers of both Maker and Breaker (the parameters p and q) vary, whereas in previous games on graphs only Breaker's power varied, and a notion of aspect ratio was absent.
Our motivation for investigating crossing games comes from percolation theory. Percolation theory is a branch of probability theory concerned, broadly speaking, with the study of random subgraphs of infinite lattices, and in particular the emergence of infinite connected components. Since its inception in Oxford in the late 1950s, it has blossomed into a beautiful and rich area of research. One of the most celebrated results in percolation theory is, without doubt, the Harris-Kesten theorem [14,16], which we state below.
Let Λ denote the square integer lattice, that is, the graph on ℤ² whose edges consist of pairs of vertices v, w ∈ ℤ² lying at Euclidean distance ‖v − w‖ = 1 from each other. The p-random measure μ_p is, informally, the probability measure on subsets of E(Λ) that includes each edge independently with probability p. The Harris-Kesten theorem states that if p ≤ 1/2, then μ_p-almost surely every connected component is finite, while if p > 1/2, then μ_p-almost surely there is an infinite connected component. We began investigating Maker-Breaker percolation games, where Maker tries to ensure the origin is contained in an infinite component, to see if some analogue of the Chvátal-Erdős probabilistic intuition could hold in this setting also, despite the presence of an infinite probability space. One of the key tools in modern proofs of the Harris-Kesten theorem is the family of so-called Russo-Seymour-Welsh lemmas, giving bounds on the probability of crossing rectangles of various aspect ratios at p = 1/2. Unsurprisingly, crossing games turned out to play an important role in our arguments when studying percolation games. In particular, the results we establish in this paper are key ingredients in the proofs of our main results on Maker-Breaker percolation games that we establish in the sequel [11] to the present paper.
Besides the motivation from percolation theory, we should like to stress also that crossing games are paradigmatic representatives of an important class of positional games. Indeed, they are related to the older and much-studied game of Hex, and the (1, 1)-crossing game we study here is in fact the commercially available game of Bridg-it. Which of the players wins Bridg-it under perfect play has been known since the late 1960s, thanks to Lehman's resolution of the more general Shannon switching game [19]. The relationship between our work in the present paper and these older games is discussed in greater detail in Sections 2.3 and 5.

Basic definitions and notation
A graph is a pair G = (V, E), where V = V(G) is a set of vertices and E = E(G) is a set of pairs from V which form the edges of G. A subgraph H of G is a graph with V(H) ⊆ V(G) and E(H) ⊆ E(G). Given n ∈ ℕ, let [n] = {1, 2, . . . , n}. In this paper we often identify a graph with its edge-set when the underlying vertex-set is clear from the context. For the remainder of this paper, unless otherwise stated, the variables m, n, p, q, x and y will always be natural numbers.
Let Λ denote the square integer lattice, that is, the graph on ℤ² whose edges consist of pairs of vertices v, w ∈ ℤ² lying at Euclidean distance ‖v − w‖ = 1 from each other. Given m and n, let Λm×n be the finite subgraph of Λ induced by the vertex-set {(x, y) : x ∈ [m], y ∈ [n]}. If e is a horizontal edge in Λm×n, i.e. e = {(x, y), (x + 1, y)} for some x, y, then we identify e with its midpoint and write e = (x + 0.5, y). Similarly, if e is a vertical edge in Λm×n, i.e. e = {(x, y), (x, y + 1)} for some x, y, then we denote e by its midpoint and write e = (x, y + 0.5). Let Sm×n be the graph obtained by taking Λm×n and removing all the edges from the set {(1, y + 0.5) : y ∈ [n − 1]} ∪ {(m, y + 0.5) : y ∈ [n − 1]}, that is, all the leftmost and rightmost vertical edges in Λm×n. We say a path in Sm×n or Λm×n is a left-right crossing path if it joins some vertex (1, y) on the left-hand side of the board to some vertex (m, y′) on the right-hand side of the board, where y, y′ ∈ [n].
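As a concrete companion to these definitions, here is a small Python sketch (the function names are ours, not the paper's) enumerating the edges of Λm×n and Sm×n in the midpoint encoding; removing the leftmost and rightmost vertical edges leaves (m − 1)n + (m − 2)(n − 1) edges in Sm×n.

```python
def lattice_edges(m, n):
    """Edges of the grid graph with m columns and n rows, each edge
    encoded by its midpoint: horizontal edges are (x + 0.5, y),
    vertical edges are (x, y + 0.5)."""
    horiz = [(x + 0.5, y) for x in range(1, m) for y in range(1, n + 1)]
    vert = [(x, y + 0.5) for x in range(1, m + 1) for y in range(1, n)]
    return horiz, vert

def S_edges(m, n):
    """Edges of the board S: drop the leftmost (x = 1) and
    rightmost (x = m) vertical edges of the full grid."""
    horiz, vert = lattice_edges(m, n)
    return horiz + [(x, y) for (x, y) in vert if x not in (1, m)]
```

For instance, `len(S_edges(6, 5))` is 41 = 5² + 4², the number of edges of the classical Bridg-it board S6×5.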
We define the (p, q)-crossing game on Sm×n (respectively Λm×n) as follows. Two players, Maker and Breaker, play in alternating turns. Maker plays first, and on each of her turns claims p (as yet unclaimed) edges of the board Sm×n (respectively Λm×n); Breaker on each of his turns answers by claiming q (as yet unclaimed) edges of the board. The game ends either when Maker manages to claim all the edges of a left-right crossing path, in which case we declare her the winner, or when the board reaches a state where it is no longer possible for Maker to ever claim such a left-right crossing path, in which case we declare Breaker the winner.
We shall work with crossing games on the board Sm×n rather than Λm×n for technical reasons, but for all practical purposes the two games are the same: it can never be in a player's interest to claim an edge in E(Λm×n) \ E(Sm×n) (so as far as winning strategies are concerned, the two games are identical), and the removal of these edges makes it easier to define a dual board, as we shall shortly do below.
As a convention, we only consider the outcomes of the games under perfect play. If Maker has a winning strategy for given values m, n, p, q, we say that the corresponding game is a Maker win, otherwise we say it is a Breaker win. Further, we follow the convention that edges claimed by Maker are coloured blue, while edges (and their duals) claimed by Breaker are coloured red.
We now define duality for our boards. The dual Λ* of Λ is the graph obtained from Λ by shifting its vertex-set by (0.5, 0.5), i.e. the graph with vertex-set {(x + 0.5, y + 0.5) : (x, y) ∈ ℤ²} and edge-set consisting of all pairs of vertices lying at Euclidean distance 1 from each other. We refer to the vertices and edges of Λ* as dual vertices and dual edges respectively. Just as for Λ, we identify each dual edge with its midpoint. Given an edge e ∈ E(Λ), its dual is defined to be the dual edge e* ∈ E(Λ*) such that e and e* have the same midpoint. So, for example, the dual of the horizontal edge e = (x + 0.5, y) ∈ E(Λ) is the vertical dual edge e* = (x + 0.5, y)* that lies between the dual vertices (x + 0.5, y − 0.5) and (x + 0.5, y + 0.5), and the dual of the vertical edge e = (x, y + 0.5) ∈ E(Λ) is the horizontal dual edge e* = (x, y + 0.5)* that lies between the dual vertices (x − 0.5, y + 0.5) and (x + 0.5, y + 0.5).
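In the midpoint encoding, the duality map is a one-line case split. The following sketch (our own helper, not from the paper) returns the two dual vertices that a given edge's dual joins.

```python
def dual_endpoints(e):
    """Endpoints of the dual edge e*, for an edge e given by its midpoint.

    Horizontal edges have a half-integer x-coordinate; their duals are
    vertical dual edges with the same midpoint, and vice versa.
    """
    x, y = e
    if x % 1 == 0.5:                       # horizontal edge (x + 0.5, y)
        return ((x, y - 0.5), (x, y + 0.5))
    return ((x - 0.5, y), (x + 0.5, y))    # vertical edge (x, y + 0.5)
```

Note that e and e* always share the same midpoint, so applying the map twice returns the original edge.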
Given a set of edges E in Λ, let E* = {e* : e ∈ E}. Given a subgraph H = (V, E) of Λ (finite or infinite), we define its dual H* to be the graph with edge-set E*, and vertex-set consisting of all dual vertices incident to some dual edge e* ∈ E*. As an example, we have that S*m×n is a rotated and translated copy of the graph S(n+1)×(m−1). In particular, S(n+1)×n is self-dual, being isomorphic to its dual graph. Similarly, the square integer lattice Λ is self-dual.
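This is not a proof of the isomorphism, but a quick sanity check is possible: counting edges directly from the definition, Sm×n has (m − 1)n horizontal and (m − 2)(n − 1) interior vertical edges, and this count is symmetric under swapping (m, n) for (n + 1, m − 1), as the claimed duality requires.

```python
def num_S_edges(m, n):
    # (m - 1)n horizontal edges plus (m - 2)(n - 1) interior vertical edges
    return (m - 1) * n + (m - 2) * (n - 1)

# The duality S*_{m x n} = S_{(n+1) x (m-1)} swaps the two parameters:
assert all(num_S_edges(m, n) == num_S_edges(n + 1, m - 1)
           for m in range(2, 12) for n in range(1, 12))
```

In particular `num_S_edges(n + 1, n)` is n² + (n − 1)², consistent with self-duality of the Bridg-it boards.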
In the context of Maker-Breaker crossing games, duality is important as it shows that Breaker can be viewed as a 'dual Maker' aiming to build a vertical crossing dual path. As Lemma 2.1 shows, the two players in our game actually have similar aims when viewed through the prism of duality: Maker and Breaker are competing for resources (edges/dual edges) to build their winning sets (left-right crossing paths/top-bottom crossing dual paths). To reflect the symmetry of their competing aims, we will sometimes refer to Maker as the horizontal player, denoting her by H, and to Breaker as the vertical player, denoting him by V. Further, we will often think of Breaker as playing on the dual board and claiming dual edges on each of his turns, rather than the corresponding edges of the original board (as he does in the formal game definition).
With the help of duality, one can define the boundary of a connected component in Λ or Λ*.
Definition 2.2 (external boundary). For a finite connected subgraph of Λ with vertex-set C, there is a unique infinite connected component C∞ of the subgraph of Λ induced by the vertices in ℤ² \ C. The external boundary ∂∞C of C is the collection of dual edges from Λ* that are dual to edges joining C to C∞ in Λ. The external boundary for the vertex-set of a finite connected subgraph of Λ* is defined mutatis mutandis.
It can be shown (see [8, Lemma 1, Chapter 1]) that the external boundary ∂∞C of the vertex-set C of a finite connected subgraph H of Λ is a dual cycle with C in its interior. A key tool in our proof of Theorem 1.1 will be the following bound on the size of the boundary cycle in terms of the number of edges in H.

Lemma 2.3. Let A be a set of k ≥ 1 edges forming a connected subgraph of Λ with vertex-set C. Then |∂∞C| ≤ 2k + 4.

Proof. We prove the lemma by induction on k. The dual boundary cycle of a single edge has size 6, so our claim holds in the base case k = 1. Now assume that we have shown our claim holds for all components consisting of at most k edges, and let A be a set of k + 1 edges forming a connected component in Λ with vertex-set C.
If A contains a cycle, then there exists some edge e ∈ A such that A \ {e} also gives a connected subgraph with vertex-set C, and so by our inductive hypothesis |∂∞C| ≤ 2k + 4. On the other hand, if A is acyclic, then the corresponding subgraph is a tree, and hence has at least one leaf (vertex of degree one). Thus there exists an edge e ∈ A such that A′ = A \ {e} spans all but one vertex of C, say the vertex v. Let B = ∂∞(C \ {v}); by the inductive hypothesis we know that |B| ≤ 2k + 4. If e is not dual to any dual edge in B, then B is also the dual boundary cycle for C, and we are done. If on the other hand we have e* ∈ B, let f₁, f₂ and f₃ be the three dual edges that together with e* form the boundary cycle around the single vertex v. Since e is the only edge of A incident with v, none of f₁, f₂, f₃ lie in A. The union of these dual edges with B \ {e*} contains the external boundary ∂∞C of C, and so this external boundary has size at most |B| + 2 ≤ 2(k + 1) + 4, as required.
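Lemma 2.3 is easy to spot-check computationally. The sketch below (our own code, not the paper's) computes |∂∞C| for a small connected edge-set by flood-filling the complement of C from a corner well outside its bounding box, which identifies the vertices of the infinite component C∞ near C, and then counting the edges between C and C∞; each such edge contributes one dual edge to ∂∞C.

```python
def external_boundary_size(edges):
    """|external boundary of C| for a finite connected set of lattice edges.

    `edges` is a collection of pairs of integer points.  Flood-filling the
    complement of the vertex-set C from a far corner of a padded bounding
    box reaches exactly the vertices of the infinite component that lie in
    the box, which include every neighbour of C in that component (holes
    enclosed by C are correctly excluded)."""
    C = {v for e in edges for v in e}
    xs = [x for x, _ in C]
    ys = [y for _, y in C]
    lo_x, hi_x = min(xs) - 2, max(xs) + 2
    lo_y, hi_y = min(ys) - 2, max(ys) + 2

    def nbrs(v):
        x, y = v
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

    seen, stack = {(lo_x, lo_y)}, [(lo_x, lo_y)]
    while stack:
        v = stack.pop()
        for w in nbrs(v):
            if (lo_x <= w[0] <= hi_x and lo_y <= w[1] <= hi_y
                    and w not in C and w not in seen):
                seen.add(w)
                stack.append(w)
    # one dual boundary edge per lattice edge joining C to the infinite component
    return sum(1 for v in C for w in nbrs(v) if w in seen)
```

A single edge has boundary cycle of size 6, and a path with k edges has boundary of size exactly 2k + 4, so the bound of Lemma 2.3 is tight on paths.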

Elementary bounds on winning boards for the (p, q)-crossing game
In this subsection, we make some elementary observations about winning boards for crossing games for general (p, q). We begin by giving some trivial bounds on the identity of the winner in the (p, p)-crossing game under optimal play on various different boards.

Proposition 2.4. Let p, m, n ∈ ℕ. (i) If m ≤ n + 1, then Maker has a winning strategy for the (p, p)-crossing game on Sm×n. (ii) If m ≥ (p + 1)(n + 1), then Breaker has a winning strategy for the (p, p)-crossing game on Sm×n.

Proof. Part (i) is immediate by strategy-stealing: it is enough to show that Maker can win on the self-dual board S(n+1)×n (playing on a narrower board can only help Maker). Suppose for contradiction that Breaker, playing second, had a winning strategy. Then Maker can play p arbitrary moves on her first turn, and from then on pretend to be Breaker playing on S*(n+1)×n, using Breaker's putative winning strategy to respond to Breaker's actual moves (and making arbitrary moves if ever asked to claim an edge she has already claimed). Maker's initial moves can never hurt her, and thus this is a winning strategy, contradicting our assumption that Breaker has a winning strategy, since we know this game can never end in a draw. Thus Maker must have a winning strategy.
For part (ii) it is enough to show that Breaker can win on the board S(p+1)(n+1)×n (playing on a wider board can only help Breaker). We divide up this board into p + 1 copies of S(n+1)×n (plus some extra edges, which we ignore). On her first move, Maker must fail to claim an edge in at least one of these copies. Thereafter Breaker plays entirely in this copy. Since S(n+1)×n is self-dual and Breaker is playing first in the (p, p)-crossing game on this copy, he has a winning strategy. (Formally, this is not quite the (p, p)-crossing game: by playing on other boards, Maker could play fewer than her p moves in our chosen copy of S(n+1)×n in any given turn, but this can never help her.)

Proposition 2.5. Let r ≥ 1. Then Breaker has a winning strategy for the (p, p + 5r)-crossing game on Sm×n for all n > r and m ≥ ⌈p/r⌉(n + 1).

Proof.
As before, it is enough to show that Breaker can win on the board Sm×n with m = ⌈p/r⌉(n + 1) (playing on a wider board can only help Breaker). Divide the board into ⌈p/r⌉ copies of S(n+1)×n. By our bounds on n and m, Maker cannot have won on her first turn (since m > p + 1). Also, by the pigeonhole principle, there is one such copy in which Maker has played at most r moves on her first turn. For the remainder of the game, Breaker will focus his efforts solely on this copy, and so we may view Breaker as playing first on an (n + 1) × n board where r edges have been pre-emptively claimed by Maker.
Breaker will only use his extra power of 5r on his first turn, to 'neutralize' Maker's edges by ensuring they can never be part of a left-right crossing path, and will otherwise follow his winning strategy for the (p, p)-crossing game on an (n + 1) × n board when he plays first, a strategy which exists by Proposition 2.4 and the self-duality of S(n+1)×n. (For completeness: other than on his first move, he plays arbitrary moves with his extra 5r edges and if ever requested to play a previously claimed edge.) Provided his first-turn 'neutralization' works, Breaker will clearly win the game.

Lemma 2.3 established that a connected subgraph of ℤ² with k ≥ 1 edges has a dual boundary cycle of size at most 2k + 4. Further, observe that if we claim all but one of the edges in the dual boundary cycle of one of Maker's connected components C, then no left-right crossing path Maker makes can go through C, and it makes no difference to the outcome of the game if all of the edges inside C had been claimed by Breaker instead. Thus, to neutralize Maker's (at most) r initial edges in Breaker's chosen sub-board, Breaker claims all but one dual edge from the boundary cycle of each of the corresponding connected components. If these components have k₁, . . . , kₛ edges respectively, then s ≤ r and k₁ + · · · + kₛ ≤ r, so by our bound from Lemma 2.3 this requires a total of at most (2k₁ + 3) + · · · + (2kₛ + 3) ≤ 2r + 3s ≤ 5r edges, which is exactly the extra power Breaker has.
Clearly the bounds on m and n in Propositions 2.4 and 2.5 are quite unsatisfactory, and we do not believe for a moment that they are tight. See Section 6 for a number of questions and conjectures pertaining to this.

The (1, 1)-crossing game: Bridg-it and the Shannon switching game
The (1, 1)-crossing game played on S m×n is also known as Bridg-it (sometimes referred to as Bridge-it), and was first invented by David Gale. Traditionally Bridg-it is played on a self-dual grid, usually S 6×5 or S 7×6 , but here we relax the definition to allow play on any grid-size. Bridg-it bears some similarities to the celebrated game of Hex, which is another positional crossing game played on the faces of a hexagonal lattice (see [15] for a formal definition of Hex), but Bridg-it is much simpler and better understood.
By Proposition 2.4, we know that in Bridg-it there is always a winning strategy (via strategy-stealing) for the first player, H, when m ≤ n + 1. When m > n + 1 the vertical player V has a winning strategy which involves mirroring H's moves through an appropriate reflection of the grid. These two strategies (strategy-stealing and the reflection strategy) have counterparts in Hex (see e.g. [15]). However, the strategy-stealing argument does not provide an explicit winning strategy for H but merely proves its existence, and constructing such a strategy for (n + 1) × n Hex boards is an extremely hard computational problem even for small n.
By contrast, there are several different explicit strategies that H can use to win in Bridg-it whenever m ≤ n + 1. The first of these to be discovered was a simple but elegant edge-pairing strategy due to Gross in 1961; see [5, p. 66] for a description. A different strategy can be read out of a winning strategy due to Lehman [19] for a different combinatorial game, known as the Shannon switching game. In addition to the crossing games studied in this paper, ideas related to Lehman's winning strategy for the Shannon switching game play an important role in our study of Maker-Breaker percolation games in our companion paper [11]. For these reasons and for completeness, we describe the Shannon switching game and its application to Bridg-it in detail below.
That strategies for the Shannon switching game may be used to construct winning strategies for H in Bridg-it is a well-known folklore result, which has been recorded in a number of places; see e.g. [5, p. 67]. We present the argument below and offer the modest improvement that, on Sm×n with m ≤ n + 1, Lehman's strategy allows H the freedom of picking any edge of the board on her first move and still win the entire game. (As far as we are aware, this observation has not appeared in the literature before.)

The Shannon switching game
The Shannon switching game is a positional game invented by Claude Shannon. The game is played on a triple (G, a, b), where G is a multigraph and a and b are two distinguished vertices of G. At the start of the game every edge is classified as unsafe. Two players, Cut and Join, play in alternating turns in which they claim unsafe edges. Cut plays first, and on each of his turns picks an unsafe edge and deletes it from G. Join plays second, and on each of her turns picks an unsafe edge and marks it as safe. The game ends when there are no unsafe edges left. Join wins if, at the end of the game, there exists a path of safe edges from a to b, and Cut wins otherwise. (Thus in our games Cut and Join correspond to Breaker and Maker respectively.)

The Shannon switching game was solved by Lehman [19], who, for each graph, determined which of the players has a winning strategy and in addition gave an explicit description of a winning strategy in each case. Lehman showed that there is a winning strategy for Join in the Shannon switching game played on (G, a, b) if and only if G has a 2-positive subgraph (that is, a subgraph containing two edge-disjoint connected spanning subgraphs) that contains both a and b. (In fact Lehman achieved his result by generalizing the Shannon switching game to a game played on matroids and solving it in that more general setting, but we will not be concerned with matroids in this paper.) It is the 'if' direction of this statement that we will need, and so we reproduce its (simple) proof here. For the interested reader, a relatively short and simple proof of the 'only if' direction of the statement (in the language of graphs rather than matroids) was given by Mansfield in [20].

Proposition 2.7 (Lehman [19]). Suppose a, b are vertices in a multigraph G such that there exists a 2-positive subgraph of G containing both a and b. Then Join has a winning strategy for the Shannon switching game played on (G, a, b).

Proof. Suppose G has a 2-positive subgraph that contains both a and b. We may pass to this subgraph and assume that G is itself 2-positive. Let G_1 and G_2 be two edge-disjoint connected spanning subgraphs of G. For each t ≥ 0, let C_t be the set of the first t edges that Cut deletes from G, and let S_t be the set of the first t edges of G that Join marks as safe. Moreover, for each i = 1, 2 let G_i^t be the graph on V(G) with edge-set (E(G_i) ∪ S_t) \ C_t.

Join's strategy will be to ensure that, for all t ≥ 0, the graphs G_1^t and G_2^t are both connected spanning subgraphs of G. We use induction on t to show she can achieve this; it is clear that G_1^t and G_2^t are both connected spanning subgraphs of G when t = 0. Suppose that G_1^{t−1} and G_2^{t−1} are both connected spanning subgraphs of G. Without loss of generality, we may assume that the next edge that Cut deletes is an edge of G_1, say the edge e = {x, y}. If G_1^{t−1} \ {e} is still spanning and connected, then Join may play her next move arbitrarily. If G_1^{t−1} \ {e} is not spanning and connected, then it consists of exactly two components, one containing x and the other containing y. As G_2^{t−1} is spanning and connected, it contains a path from x to y, and since G_1^{t−1} \ {e} is spanning, there must exist an edge f of this path that lies between the two components of G_1^{t−1} \ {e}. On her move, Join marks the edge f as safe and adds it to S_{t−1} to form S_t. This ensures G_1^t is once again a connected spanning subgraph of G, as required. Furthermore G_2^t contains G_2^{t−1} as a subgraph, and so remains a connected spanning subgraph of G. This proves our inductive statement.

When the game ends, say after Join has marked r edges as safe, every edge of G is either safe or deleted, and we have that G_1^r = G_2^r = S_r, which forms a spanning connected subgraph of G. In particular, there is a path of safe edges from a to b.

It is easy to extend Join's winning strategy from Proposition 2.7 to the (k, k)-Shannon switching game, where each player is allowed to claim k edges on each of their turns. We leave the proof as an exercise for the reader.
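Join's repair strategy from the proof is short enough to implement directly. The sketch below (pure Python, with our own function names) maintains the two graphs as sets of frozenset edges; whenever Cut's deletion disconnects one of them, Join repairs it with an edge of the other graph crossing the resulting cut.

```python
def reachable(vertices, edges, start):
    """Vertices reachable from `start` along frozenset-encoded edges."""
    adj = {v: [] for v in vertices}
    for e in edges:
        u, v = tuple(e)
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def spanning_connected(vertices, edges):
    return reachable(vertices, edges, next(iter(sorted(vertices)))) == set(vertices)

def join_reply(V, G1, G2, safe, cut, e):
    """Join's repair move after Cut deletes the unsafe edge e (already in
    `cut`).  Returns an unsafe edge to mark safe, or None if any move will
    do.  Keeps both (G_i | safe) - cut spanning and connected."""
    H = [(Gi | safe) - cut for Gi in (G1, G2)]
    for i in (0, 1):
        if not spanning_connected(V, H[i]):
            u, _ = tuple(e)
            comp = reachable(V, H[i], u)          # one side of the broken cut
            for f in sorted(H[1 - i], key=sorted):
                a, b = tuple(f)
                if (a in comp) != (b in comp) and f not in safe:
                    return f
    return None

def play_shannon(V, G1, G2, a, b):
    """Simulate a full game in which Cut greedily deletes the first unsafe
    edge in sorted order; return True if Join's safe edges connect a to b."""
    all_edges = G1 | G2
    safe, cut = set(), set()
    while all_edges - safe - cut:
        e = sorted(all_edges - safe - cut, key=sorted)[0]   # Cut's move
        cut.add(e)
        unsafe = all_edges - safe - cut
        if not unsafe:
            break
        f = join_reply(V, G1, G2, safe, cut, e)
        safe.add(f if f is not None else sorted(unsafe, key=sorted)[0])
    return b in reachable(V, safe, a)
```

On K₄ with the edge-disjoint spanning trees {01, 12, 23} and {02, 03, 13}, simulating a full game against this greedy Cut ends with the safe edges connecting a = 0 to b = 3.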

Winning strategy for Maker in Bridg-it
Theorem 2.9. Maker has a winning strategy for the (1, 1)-crossing game on S (n+1)×n (i.e. the game of Bridg-it) that allows her to choose any edge she wants on her first move.
Proof. We begin by 2-colouring the edges of S(n+1)×n. All horizontal edges (i.e. all edges of the form (x + 0.5, y)) are assigned the colour green, while all vertical edges (i.e. all edges of the form (x, y + 0.5)) are coloured orange. The horizontal player H (Maker) then picks an arbitrary edge e as her first edge and colours it blue. Based on the choice of e, we define a set A of green edges which H will recolour and use in her strategy.
If e is a green edge, we let A be any set of n − 1 green edges such that no two edges in A ∪ {e} have the same x-coordinate, and no two edges in A ∪ {e} have the same y-coordinate; we then recolour all the edges in A with the colour orange. If instead e is an orange edge, say e = (x, y + 0.5), then let f₁ = (x + 0.5, y) and f₂ = (x − 0.5, y + 1), and let A be any set of n − 2 green edges such that no two edges in A ∪ {f₁, f₂} have the same x-coordinate, and no two edges in A ∪ {f₁, f₂} have the same y-coordinate; in this case we recolour all the edges in A ∪ {f₁, f₂} with the colour orange.

Let G be the graph formed from S(n+1)×n by contracting all vertices (1, y) into a single vertex a, and contracting all vertices of the form (n + 1, y) into a single vertex b. There is a one-to-one correspondence between the edges of S(n+1)×n and G, so we may consider the colouring that G inherits from S(n+1)×n. Let e′ be the edge in G that corresponds to the edge e in S(n+1)×n, i.e. the unique blue edge in the graph. Let G_1 be the subgraph of G whose edge-set consists of the set of green edges together with the unique blue edge e′. Similarly, let G_2 be the subgraph of orange edges together with the blue edge e′. See Figure 1 for an example of these graphs when the first edge that H chose was an orange edge.

It is easy to see that G_1 and G_2 are both connected spanning subgraphs of G containing a and b. Moreover, the only edge these two graphs share is the edge e′ that Maker has claimed on her first turn. Thus, if we consider this edge as 'safe', then we know by Proposition 2.7 that Join has an explicit winning strategy on this graph when playing the Shannon switching game with distinguished vertices a and b. Thus H's strategy in Bridg-it is simply to lift Join's strategy from the Shannon switching game on G to the (1, 1)-crossing game on S(n+1)×n. At the end of the Shannon switching game on G we know that Join has constructed a path of safe edges from a to b. When lifted back to S(n+1)×n, this path is a left-right crossing path of S(n+1)×n, as required.
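Assuming the recolouring rule as described above (our reconstruction of the construction; the diagonal set A below is one valid choice, not prescribed by the paper), the claim can be verified computationally for a small board. The sketch builds G₁ and G₂ for S₅ₓ₄ with a green first edge e = (1.5, 1) and A = {(2.5, 2), (3.5, 3), (4.5, 4)}, contracts the two sides to a and b, and checks that both graphs are connected spanning subgraphs.

```python
def bridgit_graphs(n, A, e):
    """Build G1 (green + blue) and G2 (orange + blue) on the contracted
    (n+1)-by-n board and report whether each is spanning and connected.

    Horizontal (green) edges are (x + 0.5, y) for x, y in [n]; interior
    vertical (orange) edges are (x, y + 0.5) for 2 <= x <= n, y in [n-1].
    A is the set of green edges recoloured orange; e is Maker's first edge.
    """
    def contract(v):
        x, y = v
        return 'a' if x == 1 else 'b' if x == n + 1 else (x, y)

    def endpoints(edge):
        x, y = edge
        if x % 1 == 0.5:                         # horizontal edge
            return contract((x - 0.5, y)), contract((x + 0.5, y))
        return (x, y - 0.5), (x, y + 0.5)        # interior vertical edge

    green = {(x + 0.5, y) for x in range(1, n + 1) for y in range(1, n + 1)}
    orange = {(x, y + 0.5) for x in range(2, n + 1) for y in range(1, n)}
    G1 = (green - set(A)) | {e}
    G2 = (orange | set(A)) | {e}
    V = {'a', 'b'} | {(x, y) for x in range(2, n + 1) for y in range(1, n + 1)}

    def spanning_connected(E):
        adj = {}
        for edge in E:
            u, v = endpoints(edge)
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
        seen, stack = {'a'}, ['a']
        while stack:
            u = stack.pop()
            for w in adj.get(u, []):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen == V

    return spanning_connected(G1), spanning_connected(G2)
```

Here G₂ is connected because the recoloured diagonal links every consecutive pair of columns, while each row of G₁ loses at most one edge and so keeps every piece attached to a or b.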

The (2q − 1, q)-crossing game: Breaker wins on sufficiently long boards
In this section we prove Theorem 1.2, which states that if m is sufficiently large (with respect to q and n), then Breaker, also referred to as the vertical player V , has a winning strategy for the (2q − 1, q)-crossing game on S m×n .

Proof of Theorem 1.2. Let T be the number of edges in S (n+1)×n , and set m 0 = ((6q − 2)^T + 2q − 1)(n + 1).
We split the board S m 0 ×n into (6q − 2)^T + 2q − 1 disjoint copies of S (n+1)×n , which we call strips. Recall our convention that edges claimed by Breaker are coloured red. At any point during the game, we say a strip is k-valid if it contains exactly k red edges and is in a winning position for V in the (1, 1)-crossing game on S (n+1)×n when V plays second. We say a strip is k-neutral if it contains exactly k red edges and is in a winning position for V in the (1, 1)-crossing game when V gets to play first. If a strip is neither k-valid nor k-neutral for any integer k, we say that it is invalid. Note that if a strip is k-valid, then it is also k-neutral.
We know by Proposition 2.4 that every strip is 0-neutral at the start of the game. The game begins with H playing edges in up to 2q − 1 different strips, possibly making them invalid in the process. At this point, the vertical player V 's strategy will proceed in T + 1 phases, with phase 0 starting after H 's initial turn. For each k ∈ {0, 1, . . . , T}, V 's strategy will ensure that at the beginning of phase k, (i) it is V 's turn to play, and (ii) there are at least (6q − 2)^(T−k) k-neutral strips. Note that this implies that at the start of phase T there will be at least one T-neutral strip, which by definition must contain a path of red dual edges from the top of the strip to the bottom of the strip, and thus V wins the game.
Clearly (i) and (ii) both hold for k = 0. Let us now show that if (i) and (ii) both hold at the beginning of phase k, then V can ensure they both hold at the beginning of phase k + 1 too. On each turn in phase k, the vertical player V will choose q different k-neutral strips and play a single edge in each that turns these k-neutral strips into (k + 1)-valid strips. The horizontal player H can now distribute her 2q − 1 edges among all of the strips. Each edge that H plays can either turn a (k + 1)-valid strip into a (k + 1)-neutral strip, or turn a k-neutral or (k + 1)-neutral strip into an invalid one (or can be played in another kind of strip, in which case we ignore it).
For each t ∈ Z ≥0 , let A t = A t (k) be the number of (k + 1)-valid strips after a combined total of t edges have been claimed by the two players in phase k of the game (where for convenience we imagine the two players play the edges on their turn in some arbitrary order). Similarly, let B t = B t (k) be the number of (k + 1)-neutral strips after a combined total of t edges have been played by the two players in phase k. Finally, let R t = R t (k) = 2A t + B t .
How does R t vary with t? If the next edge to be claimed is one of V 's, then R t+1 = R t + 2, since a k-neutral strip becomes (k + 1)-valid. On the other hand, if the next edge to be claimed is one of H 's, then R t+1 ≥ R t − 1. As the two players claim a combined total of 3q − 1 edges on each turn of the game, we have that R r(3q−1) ≥ r for all r ∈ Z ≥0 , until either phase k ends or V runs out of k-neutral strips. Now V decides that phase k ends (and phase k + 1 begins) when R r(3q−1) ≥ 2(6q − 2)^(T−k−1) for some r ∈ Z ≥0 . Note that after H and V have both completed their turns, the number of k-neutral strips has decreased by at most 3q − 1. Thus, as the number of k-neutral strips at the start of phase k is at least (6q − 2)^(T−k) , we know that the number of k-neutral strips for V to play in will not run out before R r(3q−1) ≥ 2(6q − 2)^(T−k−1) . As R r(3q−1) ≥ 2(6q − 2)^(T−k−1) and every (k + 1)-valid strip is also (k + 1)-neutral, we have that the number of (k + 1)-neutral strips at the start of phase k + 1 is at least R r(3q−1) /2 ≥ (6q − 2)^(T−k−1) , and further that it is V 's turn to play, so that (i) and (ii) both hold as required.
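The potential computation above can be sanity-checked numerically. The following sketch (our own, not part of the proof) writes R t for the potential 2A t + B t tracked above, assumes the worst case for V (every one of H 's edges decreases R t by exactly 1 and destroys one neutral strip), and confirms that the k-neutral strips never run out before the phase ends:

```python
# Sanity check of the phase accounting in the proof of Theorem 1.2,
# assuming the worst case for V: each of H's 2q - 1 edges reduces
# R_t = 2*A_t + B_t by exactly 1 and destroys one neutral strip.
def check_phase(q, T, k):
    start_neutral = (6 * q - 2) ** (T - k)      # k-neutral strips at phase start
    target = 2 * (6 * q - 2) ** (T - k - 1)     # phase ends once R reaches this
    R, rounds, consumed = 0, 0, 0
    while R < target:
        R += 2 * q            # V's q edges: each gives +2
        R -= 2 * q - 1        # H's 2q - 1 edges: each costs at most 1
        rounds += 1
        consumed += 3 * q - 1 # at most 3q - 1 k-neutral strips used up per round
    # V never runs out of k-neutral strips before the phase ends, and the
    # surviving potential guarantees enough (k+1)-neutral strips.
    assert consumed <= start_neutral
    assert R // 2 >= (6 * q - 2) ** (T - k - 1)
    return rounds

for q in (1, 2, 3):
    for T in (2, 3):
        for k in range(T - 1):
            check_phase(q, T, k)
```

The equality case is tight: the phase lasts exactly 2(6q − 2)^(T−k−1) rounds in the worst case, consuming exactly (3q − 1) · 2(6q − 2)^(T−k−1) = (6q − 2)^(T−k) neutral strips.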

The (2q, q)-crossing game: Maker wins on arbitrarily long and narrowest possible boards
In this section we prove Theorem 1.1, which states that if n ≥ q + 1, then Maker, also referred to as the horizontal player H , has a winning strategy for the (2q, q)-crossing game on S m×n , for any m ∈ N. Note that the condition n ≥ q + 1 is necessary, as if n ≤ q, then V could win the game in a single turn. We in fact prove a more general result, showing H can win the q-double-response game (defined below); this will not complicate the argument, and the greater generality will allow us to apply these results to the study of percolation games in the sequel to this paper [11]. A key idea in our proof will be to consider a third game, the secure game, where V plays one edge at a time but is given the extra power of reclaiming some of H 's edges. This will allow us to treat a (2q, q) game like a (2, 1) game, which is much more amenable to analysis, and we shall show that even with V 's extra powers, H still has a winning strategy.
Let S ∞×n be the infinite subgraph of the square lattice Z 2 induced by the vertex-set {(x, y) : x ∈ Z, y ∈ [n]}. The q-double-response game is a game played by two players, a horizontal player H and a vertical player V , on the edges of S ∞×n . The game begins with V playing first. On each turn t, V picks an integer r t ∈ [q] and then claims r t as yet unclaimed edges in S ∞×n for himself; then H answers by claiming 2r t as yet unclaimed edges in response to V 's move. In this game, V 's aim is to claim a set of edges corresponding to a top-bottom crossing path of dual edges, and we say V wins if he is able to do so. The horizontal player H 's aim is to prevent this from ever happening, and we say H wins the game if she is able to do so. We remark that throughout this section we will always view the game through the lens of duality, so that V always claims dual edges.
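Throughout, edges are named by their midpoints, so that half-integer coordinates distinguish horizontal from vertical edges, and each primal edge corresponds to the unique dual edge crossing it. The following helper functions (our own illustration, with hypothetical names, not notation from the paper) make this correspondence concrete:

```python
# Illustration of the planar duality used throughout: a primal edge of the
# square lattice is named by its midpoint, and its dual edge joins the two
# dual vertices (face centres, at half-integer coordinates) on either side.
# These helpers are our own, for experimentation only.
def endpoints(edge):
    """Return the two lattice endpoints of the primal edge with this midpoint."""
    x, y = edge
    if x == int(x):                      # vertical edge
        return ((x, y - 0.5), (x, y + 0.5))
    return ((x - 0.5, y), (x + 0.5, y))  # horizontal edge

def dual_endpoints(edge):
    """Return the two dual vertices joined by the dual of this edge."""
    x, y = edge
    if x == int(x):                      # vertical edge -> horizontal dual edge
        return ((x - 0.5, y), (x + 0.5, y))
    return ((x, y - 0.5), (x, y + 0.5))  # horizontal edge -> vertical dual edge

# A horizontal edge between (0, 1) and (1, 1) has midpoint (0.5, 1); its dual
# joins the face centres directly below and above it.
assert endpoints((0.5, 1)) == ((0, 1), (1, 1))
assert dual_endpoints((0.5, 1)) == ((0.5, 0.5), (0.5, 1.5))
```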
We will show that if n ≥ q + 1, then H has a winning strategy for the q-double-response game. Clearly this implies H has a winning strategy in the (2q, q)-crossing game on S m×n , playing as Maker (and even surrendering her first move). Thus Theorem 1.1 is immediate from the following.

Theorem 4.1. If n ≥ q + 1, then the horizontal player H has a winning strategy for the q-double-response game on S ∞×n .

Before we prove Theorem 4.1, let us sketch the main ideas behind the proof and give some preliminary definitions. We define an arch to be a path of edges that starts and ends at a bottom-most vertex or starts and ends at a top-most vertex in S ∞×n . Similarly, we define a dual arch to be a path of dual edges that starts and ends at a bottom-most dual vertex or starts and ends at a top-most dual vertex in S * ∞×n . We may assume that V never claims a dual edge that would create a cycle of red dual edges or a dual arch of red dual edges. Indeed, if V plays such a dual edge e, then at any stage later on in the game, if there exists a path P of red dual edges from the top of the grid to the bottom of the grid, then there still exists such a path if we remove e. In particular, the result of the game cannot depend on the identity of the player who claimed e (or equivalently its dual e * ). Therefore, if such a dual edge e was played, we can ignore it and pretend V has claimed some other edge.
A key ingredient in the proof will be Lemma 2.3, which states that if A is a set of k edges in Z 2 that form a connected component C, then the boundary cycle dual to C contains at most 2k + 4 dual edges. While we do not use Lemma 2.3 directly, it is the 'explanation' for why our proof works, and it will be helpful for the reader to keep it in mind throughout this section.
Suppose that H were able to ensure that at the end of each of her turns she has claimed every edge of every boundary cycle of every component created by V 's red dual edges. If so, then as any top-bottom dual crossing path needs at least n ≥ q + 1 dual edges, V would be unable to win on any turn, and so H clearly wins the game. Unfortunately H cannot always claim all the edges of every boundary cycle. For example, if V plays q pairwise disjoint and sufficiently spaced-out dual edges, then H would need 6q edges to claim all the edges in each of the boundary cycles of the q components formed by these dual edges. However, what H can hope for, given Lemma 2.3, is to claim all but at most four edges in every boundary cycle of every component of red dual edges. Our strategy will show that H can indeed do this, and can do it in such a way that V will never be able to create a top-bottom dual crossing path, even by connecting up components created over many different turns. To make this precise, we need some definitions.

We now come to the key definition of brackets. Underpinning our strategy for H is the fact that she can ensure that the four edges in a component's boundary cycle she is unable to claim have a nice form, namely that of one of the following brackets. See Figure 2 for a picture of these different bracket types, together with their corners and interior dual vertices, as defined below.

Definition 4.4 (brackets). We say the edges {(x + 0.5, y), (x + 1.5, y), (x + 2, y + 0.5), (x + 2, y + 1.5)} form a bracket of Type 1 if none of them are red. We call the vertices (x, y) and (x + 2, y + 2) the corners of the bracket, and we call the dual vertices (x + 0.5, y + 0.5), (x + 1.5, y + 0.5) and (x + 1.5, y + 1.5) the interior dual vertices of the bracket.

We say the edges {(x + 0.5, y), (x + 1, y + 0.5), (x + 1.5, y + 1), (x + 2, y + 1.5)} form a bracket of Type 2 if none of them are red. We call the vertices (x, y) and (x + 2, y + 2) the corners of the bracket, and we call the dual vertices (x + 0.5, y + 0.5) and (x + 1.5, y + 1.5) the interior dual vertices of the bracket.

We say the edges {(x, y − 0.5), (x + 0.5, y − 1), (x + 1, y − 0.5), (x + 1, y + 0.5)} form a bracket of Type 3 + if none of them are red. We call the vertices (x, y) and (x + 1, y + 1) the corners of the bracket, and we call the dual vertices (x + 0.5, y − 0.5) and (x + 0.5, y + 0.5) the interior dual vertices of the bracket.

Finally, we say the edges {(x + 0.5, y), (x + 1.5, y), (x + 2, y + 0.5), (x + 1.5, y + 1)} form a bracket of Type 3 − if none of them are red. We call the vertices (x, y) and (x + 1, y + 1) the corners of the bracket, and we call the dual vertices (x + 0.5, y + 0.5) and (x + 1.5, y + 0.5) the interior dual vertices of the bracket.

Remark 4.5. Any bracket of Type 1 or Type 2 is preserved under the reflection that switches its two corners. Moreover, applying to a bracket of Type 3 + the reflection that switches its two corner vertices yields a bracket of Type 3 − , and vice versa. More generally, the set of brackets is closed under reflections through lines parallel to x + y = 0.
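Remark 4.5 is easy to verify by direct computation. The following check (our own, using the midpoint naming of edges and taking (x, y) = (0, 0)) confirms that the reflection switching a bracket's corners fixes Types 1 and 2 and exchanges Types 3 + and 3 − :

```python
# Numerical check of Remark 4.5 (our own verification, not from the paper):
# reflecting a bracket through the line parallel to x + y = 0 that switches
# its two corners maps Type 1 and Type 2 to themselves and Type 3+ to Type 3-.
# Edges are named by their midpoints, with (x, y) = (0, 0).
TYPE1 = {(0.5, 0), (1.5, 0), (2, 0.5), (2, 1.5)}          # corners (0,0), (2,2)
TYPE2 = {(0.5, 0), (1, 0.5), (1.5, 1), (2, 1.5)}          # corners (0,0), (2,2)
TYPE3_PLUS = {(0, -0.5), (0.5, -1), (1, -0.5), (1, 0.5)}  # corners (0,0), (1,1)
TYPE3_MINUS = {(0.5, 0), (1.5, 0), (2, 0.5), (1.5, 1)}    # corners (0,0), (1,1)

def reflect(points, corner1, corner2):
    """Reflect through the line parallel to x + y = 0 swapping the two corners."""
    c = (corner1[0] + corner1[1] + corner2[0] + corner2[1]) / 2
    return {(c - b, c - a) for (a, b) in points}  # (a, b) -> (c - b, c - a)

assert reflect(TYPE1, (0, 0), (2, 2)) == TYPE1
assert reflect(TYPE2, (0, 0), (2, 2)) == TYPE2
assert reflect(TYPE3_PLUS, (0, 0), (1, 1)) == TYPE3_MINUS
assert reflect(TYPE3_MINUS, (0, 0), (1, 1)) == TYPE3_PLUS
```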
We will make use of the above remark to reduce the amount of (tedious but necessary) case-checking required in the proof of Theorem 4.1.

Definition 4.6 (components). A component is a connected component of the red dual edges. We say a component is a bottom component if it contains a bottom-most dual vertex (one of the form (x + 0.5, 0.5)), a top component if it contains a top-most dual vertex (one of the form (x + 0.5, n + 0.5)), and a floating component otherwise.

Definition 4.7 (secure bottom component, gate). If C is a bottom component, then (as we may assume V never plays a dual edge that creates a dual arch of red dual edges) C contains a unique bottom-most dual vertex v = (x + 0.5, 0.5). We say that C is secure if there exist a non-red edge e = (x′, 1.5) for some x′ ∈ Z with x′ ≥ x + 1, and a path P of blue edges from the vertex (x′, 2) to the vertex (x, 1), such that (i) if x′ > x + 1, then e is in fact a blue edge, (ii) C is contained in the interior of P ∪ {e}, (iii) for every edge f ∈ P, at least one of the dual vertices of the dual edge f * is in C.
We say that the edge e is the bottom component C's gate, and if this edge e is blue, then we say that C is extra secure.

Definition 4.8 (secure top component, gate). If C is a top component, then (as we may assume that V never plays a dual edge that creates a dual arch of red dual edges) C contains a unique top-most dual vertex v = (x + 0.5, n + 0.5). We say that C is secure if there exist a non-red edge e = (x′, n − 0.5) for some x′ ∈ Z with x′ ≥ x + 1, and a path P of blue edges from the vertex (x′, n − 1) to the vertex (x, n), such that (i) if x′ > x + 1, then e is in fact a blue edge, (ii) C is contained in the interior of P ∪ {e}, (iii) for every edge f ∈ P, at least one of the dual vertices of the dual edge f * is in C.

We say that the edge e is the top component C's gate, and if this edge e is blue, then we say that C is extra secure.
We say the grid is secure at a given stage of the game if every component is secure. Note that if the grid is secure, then no component can simultaneously be a top component and a bottom component, i.e. there is no top-bottom red dual crossing path. See Figure 3 for an example of a grid in a secure position.

Lemma 4.9. If the grid is secure at the start of one of V 's turns in the q-double-response game, then V cannot win the game on that turn.

Proof. Let us suppose that the grid is in a secure position and that V claims l dual edges and thereby creates a path of red dual edges P that connects the top of the grid to the bottom of the grid. Let {e 1 , . . . , e l } be the dual edges in P that V claimed, in the order that they appear when one travels along P from the bottom of the grid to the top of the grid. We may assume that none of the dual edges in {e 1 , . . . , e l } has both of its end-points in the same component (before V takes his turn), as such an edge would be superfluous with respect to creating a top-bottom dual crossing path. We will show that l ≥ n, which in turn proves the lemma, as n ≥ q + 1 while V claims at most q dual edges on his turn.
For each dual edge e i , let v − i and v + i be the end dual vertices of e i such that, when travelling along P from the bottom of the grid to the top of the grid, one traverses v − i before v + i . For each such dual edge, let x i ∈ Z and y i ∈ N be such that v + i = (x i + 0.5, y i + 0.5). We will show by induction on i that y i ≤ i. The statement is clear when i = 1, as either e 1 must be a dual edge that meets the bottom of the grid, and so y 1 = 1, or e 1 meets a bottom component, say C. As C is secure, we must have that e 1 lies across C's gate, say the edge f = (x, 1.5). As such we have that e 1 = f * and so y 1 = 1. Now suppose that y k ≤ k and consider the dual edge e k+1 . If e k and e k+1 have a dual vertex in common, then it is clear that y k+1 ≤ y k + 1 and so we are done. If e k and e k+1 do not share a dual vertex, then there must be some floating component C such that both e k and e k+1 are adjacent to a vertex in C, and the dual edges e k and e k+1 each lie across some edge from C's bracket B. If B is a bracket of Type 1 or Type 2 with corners (x, y) and (x + 2, y + 2), then we must have that y k ≥ y and y k+1 ≤ y + 1. Similarly, if B is a bracket of Type 3 + with corner vertices (x, y) and (x + 1, y + 1), then we must have that y k ≥ y − 1 and y k+1 ≤ y. Finally, if B is a bracket of Type 3 − with corner vertices (x, y) and (x + 1, y + 1), then we must have that y k ≥ y and y k+1 ≤ y + 1. In all cases we have shown that y k+1 ≤ y k + 1 and so we have proved our inductive claim.
We now show that we have l ≥ n. Suppose for contradiction that l ≤ n − 1, and consider the dual edge e l . We must have that e l meets either the top of the grid or a top component. As we showed above that y l ≤ l ≤ n − 1, we can rule out the first of these two possibilities: e l cannot meet the top of the grid. Thus e l meets a top component. Moreover, as this top component is secure, e l is a horizontal dual edge, y l = n − 1, and v − l lies to the right of v + l . Note that we cannot have v − l = v + l−1 , as this would contradict the fact that y l−1 ≤ l − 1 ≤ n − 2. Thus the vertex v − l must be part of some floating component C, and the dual edge e l must lie across C's bracket B. However, the only way this would be possible is if B were a bracket of Type 3 + with corner vertices (x l + 1, n) and (x l + 2, n + 1). If this were the case, we must have that the dual vertex (x l + 1.5, n + 0.5) is also part of C, as it is an interior dual vertex of the bracket B. However, this would tell us that C is a top component and not a floating component. Therefore no such component C can exist, which gives the desired contradiction.

Lemma 4.9 tells us that if the grid is secure at the start of V 's turn, then it is not possible for V to win in a single turn. We now show that if the grid is secure at the start of V 's turn, then, after V has claimed r ≤ q edges, H can return the grid to a secure state by placing at most 2r blue edges. This immediately implies that H wins the q-double-response game on S ∞×n whenever n ≥ q + 1, and thus proves Theorem 4.1.
To show that H can always return the grid to a secure state at the end of each of her turns, we introduce the secure game. The main idea behind this game is that it allows H to respond to V 's edges one at a time.

Definition 4.10 (secure game). The secure game is played by H and V on the graph S ∞×n . At any point in the game, some edges will be unclaimed, some red (claimed by V ), some blue (claimed by H ), and some will have become blue double-edges (claimed twice by H ).
On each of his turns, V claims an edge and colours it red. The edge he claims may be unclaimed, or may already be a blue edge or a blue double-edge, in which case V breaks these blue edges and replaces them with a red single edge. However, V 's choice of an edge (regardless of whether it is an unclaimed edge, a blue edge or a blue double-edge) is subject to three restrictions: (a) V is not allowed to claim an edge if doing so would create a red dual cycle or a red dual arch, (b) V is not allowed to claim an edge if doing so connects a top component to a bottom component, (c) if C is a floating component and P is the path of blue edges that helps secure C, then V is not allowed to claim an edge from P if doing so turns C into either a top or a bottom component.
Once V has played an edge e, H responds by claiming b + 2 edges and colouring them blue, where b is the number of blue edges broken by e, counting multiplicity. Thus H may respond with two, three or four edges. At any stage of this game, we say the grid is secure if two conditions are met. The first condition is that the grid is in a secure position as far as the q-double-response game is concerned (treating all blue double-edges as blue simple edges for that purpose). The second condition is that if C and C′ are distinct red components and P and P′ are the blue paths securing them, then every edge in the intersection P ∩ P′ is a blue double-edge.
We say H wins the game if she can ensure that at the end of each of her turns the game is in a secure position (i.e. the board remains secure however long we play). Otherwise we say V wins.

Lemma 4.11. The horizontal player H has a winning strategy for the secure game on S ∞×n .

Proof. We will show how H can win the secure game by supposing the grid is in a secure position, and describing how H should respond to any dual edge that the vertical player V claims. Suppose that V has played his single red dual edge e = {v 1 , v 2 }. We split into a number of different cases, determined by whether or not the dual vertices v 1 and v 2 are part of pre-existing components. Some of these cases are then split into further sub-cases depending on whether or not e lies across an existing blue edge or a bracket.

Case 1. Before e is played, neither v 1 nor v 2 is part of any component. Suppose that e is a vertical dual edge, say e = (x + 0.5, y) * with v 1 = (x + 0.5, y − 0.5) and v 2 = (x + 0.5, y + 0.5). Then none of the six following edges are red (as otherwise one of v 1 or v 2 would have been part of a component before e was played): (x, y + 0.5), (x + 0.5, y + 1), (x + 1, y + 0.5), (x + 1, y − 0.5), (x + 0.5, y − 1), (x, y − 0.5). The horizontal player H now claims the edges (x, y + 0.5) and (x + 0.5, y + 1) and colours them blue. The grid is now secure, as the new component created by V is a floating component secured by the blue path P = {(x, y + 0.5), (x + 0.5, y + 1)} and the bracket B of Type 3 + with corner vertices (x, y) and (x + 1, y + 1).
We have now dealt with all possibilities in Case 1.
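The bookkeeping in Case 1 can be checked directly: the six listed edges form the boundary cycle of the new one-edge component, and H 's two blue edges together with the four edges of the Type 3 + bracket partition it. A small check of this (our own, not part of the proof):

```python
# Our own check of the Case 1 bookkeeping: the boundary cycle of the single
# red dual edge e = (x + 0.5, y)* consists of six primal edges, and H's two
# blue edges together with the four edges of the Type 3+ bracket with corners
# (x, y) and (x + 1, y + 1) partition it exactly.
x, y = 0, 5  # an arbitrary position away from the boundary of the grid

boundary = {(x, y + 0.5), (x + 0.5, y + 1), (x + 1, y + 0.5),
            (x + 1, y - 0.5), (x + 0.5, y - 1), (x, y - 0.5)}
blue_path = {(x, y + 0.5), (x + 0.5, y + 1)}             # H's two claimed edges
bracket_3plus = {(x, y - 0.5), (x + 0.5, y - 1),         # Type 3+ bracket edges,
                 (x + 1, y - 0.5), (x + 1, y + 0.5)}     # left unclaimed

assert blue_path | bracket_3plus == boundary   # together they cover the cycle
assert blue_path & bracket_3plus == set()      # and they do not overlap
assert len(boundary) == 2 * 1 + 4              # the 2k + 4 bound with k = 1
```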

Case 2.
Before e is played, the vertex v 1 is part of some component C while the vertex v 2 is not part of any component.
Let P be the path of blue edges that secures the component C. If C is a floating component, let B be the bracket that, together with P, secures C. If instead C is a bottom or top component, let G be the gate that helps secure C. If v 2 lies in the interior of the cycle formed by P ∪ B, or the arch formed by P ∪ G, then the grid is still secure after e has been played, and so H may play her edges arbitrarily. If v 2 is not in the interior of the cycle formed by P ∪ B or the arch formed by P ∪ G, then we note that the edge e must lie across either P, B or G, as if this were not the case, then v 2 would be a vertex that contradicts the fact that C was secure before V 's move.
We first deal with the case that e crosses an edge of P, say the edge f ∈ P. We cannot have that v 2 is a top- or bottom-most dual vertex, as V is not allowed to break a blue edge with a red edge that contains a top- or bottom-most vertex. As such, there exist three edges (let us call them g 1 , g 2 and g 3 ) such that the set {f , g 1 , g 2 , g 3 } forms a closed loop around the vertex v 2 . As V played a red edge that breaks a single blue edge, we have that H is allowed to play three edges in response.
As v 2 is not part of any component, we have that the three edges {g 1 , g 2 , g 3 } are not red edges, and so H claims all three of them. These three edges, together with P \ {f }, form a path that, together with the bracket B, secures C ∪ {v 2 }.
Suppose next that C is a bottom component, and that the dual edge e lies across its gate G. Then e is of the form e = (x, 1.5) * , and the edges (x + 0.5, 2) and (x + 1, 1.5) are not red (as otherwise v 2 would be part of some pre-existing component). Thus the horizontal player H can claim these two edges and C ∪ {v 2 } is now extra secure. Similarly, if C is a top component, and the dual edge e lies across G then, writing e = (x, n − 0.5) * , we see that neither of the edges (x + 0.5, n − 1) and (x + 1, n − 0.5) are red (as otherwise v 2 would be part of some pre-existing component). Thus the horizontal player H can claim these two edges and C ∪ {v 2 } is now extra secure.
We now deal with the case where C is a floating component and e crosses an edge of its bracket B. We divide here into sub-cases, depending on the type of the bracket B. For each sub-case there are some further sub-sub-cases to consider, depending on which edge of the bracket B is crossed by e.
In all cases we will list the two edges f 1 and f 2 that constitute H 's response, as well as a new bracket B′ or a new gate G′. The blue path P ∪ {f 1 , f 2 } together with B′ or G′ will then secure the new red component C ∪ {v 2 }. Since v 2 is not part of a pre-existing red component, it will follow that the two edges f 1 and f 2 are not red edges (so that H is free to claim them, or to turn them into blue double-edges if she had already claimed them in the past) and further that none of the edges in the new bracket B′ or gate G′ are red (so that H 's move does indeed secure C ∪ {v 2 }, as claimed).

In our analysis, we will make use of Remark 4.5 on the closure of the family of brackets under reflections swapping their corners, which will allow us to greatly reduce the number of cases we need to check. Finally, before we dive into the case analysis, we would advise the reader to look at Figures 4 (Cases 2a, 2b), 5 (Case 2c), 6 (Cases 3a, 3b, 3c) and 7 (Cases 3d, 3e, 3f) in parallel with the proof, as the pictures there may greatly aid in visualizing the argument.

Case 2a.
The bracket B is a bracket of Type 1 with corner vertices (x, y) and (x + 2, y + 2) for some x, y. Suppose instead that y ≥ 2. If e is the dual edge (x + 0.5, y) * , then H plays the two edges (x, y − 0.5) and (x + 2, y + 1.5). The new bracket B′ is a bracket of Type 2 with corner vertices (x, y − 1) and (x + 2, y + 1). If e is the dual edge (x + 1.5, y) * , then H plays the two edges (x + 0.5, y) and (x + 2, y + 1.5). The new bracket B′ is a bracket of Type 3 + with corner vertices (x + 1, y) and (x + 2, y + 1). Finally, if e is the dual edge (x + 2, y + 0.5) * or the dual edge (x + 2, y + 1.5) * , then we consider the dual edge e and the bracket B under the reflection that switches the corners of B and determine our response by that given in the cases e = (x + 0.5, y) * or (x + 1.5, y) * , reflected back. By Remark 4.5, the new bracket B′ thus obtained is a valid bracket.
Case 2b. The bracket B is a bracket of Type 2 with corner vertices (x, y) and (x + 2, y + 2) for some x, y.
As in Case 2a, as B is fixed under the reflection that switches its corners, it is only necessary to deal with the cases e = (x + 0.5, y) * and e = (x + 1, y + 0.5) * .
If instead e is the dual edge (x + 1, y + 0.5) * , then there is in fact no need for H to play any edges (so she may play them arbitrarily). The new bracket B′ is a bracket of Type 1 with corner vertices (x, y) and (x + 2, y + 2).
Case 2c. The bracket B is a bracket of Type 3 + with corner vertices (x, y) and (x + 1, y + 1) for some x, y.
Suppose instead that v 2 is not a bottom-most dual vertex. If e is the dual edge (x, y − 0.5) * , then H plays the two edges (x − 0.5, y) and (x − 1, y − 0.5). The new bracket B′ is a bracket of Type 1 with corner vertices (x − 1, y − 1) and (x + 1, y + 1). If e is the dual edge (x + 0.5, y − 1) * , then H plays the two edges (x, y − 0.5) and (x + 1, y + 0.5). The new bracket B′ is a bracket of Type 3 + with corner vertices (x, y − 1) and (x + 1, y). If e is the dual edge (x + 1, y − 0.5) * , then H plays the two edges (x, y − 0.5) and (x + 1, y + 0.5). The new bracket B′ is a bracket of Type 3 − with corner vertices (x, y − 1) and (x + 1, y). Finally, if e is the dual edge (x + 1, y + 0.5) * , then H plays the two edges (x, y − 0.5) and (x + 1.5, y + 1). The new bracket B′ is a bracket of Type 2 with corner vertices (x, y − 1) and (x + 2, y + 1).

Case 2d.
The bracket B is a bracket of Type 3 − with corner vertices (x, y) and (x + 1, y + 1) for some x, y.
Finally, suppose that v 2 is not a bottom-most or top-most dual vertex. Then we consider the dual edge e and the bracket B under the reflection that switches the corners of B and determine our response using Case 2c (since B is mapped to a bracket of Type 3 + ), reflected back. By Remark 4.5, the new bracket B′ thus obtained is a valid bracket.

Case 3.
Before e is played, the vertex v 1 is part of some component C 1 while the vertex v 2 is part of some component C 2 .
We first note that C 1 and C 2 cannot be the same component, as V cannot claim any dual edges that would create a closed cycle of red dual edges (violating restriction (a) from the definition of the secure game). For each i = 1, 2, let P i and B i (or G i ) be the respective path and bracket (or gate) that makes C i a secure component.
We first deal with the case where e * ∈ P 1 ∩ P 2 . Observe that C 1 and C 2 must then both be floating components. Indeed, suppose one of the components, say C 1 , were a bottom component. Then C 2 cannot be a top or a floating component (else V 's move would violate restriction (b) or (c)) and further, C 2 cannot be a bottom component (else V 's move would create a red dual arch, violating restriction (a)), a contradiction. Thus neither of C 1 , C 2 can be a bottom component, and in a similar way neither of them can be a top component.
As e * ∈ P 1 ∩ P 2 and the board was secure before V 's turn, e * must have been a blue double-edge, and so H has four edges to respond with. Thus H plays all the edges in B 1 , of which there are at most four. It is easy to see that the new component C 1 ∪ C 2 is secured by some path contained in the set of edges (P 1 ∪ P 2 ∪ B 1 ) \ {e * } and the bracket B 2 .
We next deal with the case where e * is an edge that lies in both P 1 and B 2 (or G 2 ). By the same arguments as above (based on restrictions (a), (b) and (c)), we must have that C 2 is a floating component. The horizontal player H has three edges to respond with, and so she plays the three edges in B 2 \ {e * }. Once again, it is easy to see that the new component C 1 ∪ C 2 is secured by some path contained in the set of edges (P 1 ∪ P 2 ∪ B 2 ) \ {e * } and the bracket B 1 (or gate G 1 ).
Finally we need to deal with the case where e * is an edge that lies in both B 1 (or G 1 ) and B 2 (or G 2 ). We first note that it is not possible for either C 1 or C 2 to be top components. Indeed, if say C 1 was a top component, then C 2 must be a floating component (by restrictions (a) and (b)), yet there is no possible bracket for C 2 that can have an edge in common with G 1 . Next, let us suppose that C 1 is a bottom component whose gate G 1 consists of the edge (x, 1.5). By restrictions (a) and (b), C 2 is a floating component and B 2 must be a bracket of Type 3 + with corners (x, 2) and (x + 1, 3) (no other bracket type is compatible with G 1 ). The horizontal player H then plays the edges (x + 1, 1.5) and (x + 1, 2.5). The new component C 1 ∪ C 2 is a bottom component extra-secured by a path contained in the set of edges P 1 ∪ P 2 ∪ {(x + 1, 2.5)} and the gate {(x + 1, 1.5)}.
Finally, if C 1 and C 2 are both floating components, then we have to split into sub-cases, depending on the bracket type of B 1 and B 2 , and on which edge they share. Note to begin with that if B 1 , B 2 are both of Type 1 or 2 and have an edge in common, then they must share an interior vertex, contradicting the fact that C 1 and C 2 are distinct components. Thus, without loss of generality, we may assume that B 1 is a bracket of Type 3 + or 3 − . We deal below with the case where B 1 is a bracket of Type 3 + with corner vertices (x, y) and (x + 1, y + 1). The case where B 1 is a bracket of Type 3 − will then follow by considering the reflection switching B 1 's two corner vertices and making use of Remark 4.5.
In each of the following sub-cases we will list the set P 3 of two (or fewer) edges from (B 1 ∪ B 2 ) \ {e * } that H plays, and the location and type of a new bracket B. It is easy to check then that there is a blue path P contained within the edges of P 1 ∪ P 2 ∪ P 3 such that P and B together secure the new component C 1 ∪ C 2 . In the sub-cases below, we cover all ways in which a bracket of Type 3 + and another bracket could share an edge. In each sub-case (except Case 3g, which we deal with via a reflection and Remark 4.5), we let e = (x, y − 0.5) * be the dual edge played by V .
Case 3a. The bracket B 2 is a bracket of Type 1 with corner vertices (x − 2, y − 2) and (x, y).
In this case H plays the edge (x − 1.5, y − 1). The new bracket B is a bracket of Type 1 with corner vertices (x − 1, y − 1) and (x + 1, y + 1).
Case 3d. The bracket B 2 is a bracket of Type 3 + with corner vertices (x − 1, y − 1) and (x, y).
Case 3e. The bracket B 2 is a bracket of Type 3 + with corner vertices (x − 1, y) and (x, y + 1).
In this case H plays the edge (x − 1, y − 0.5). The new bracket B is a bracket of Type 1 with corner vertices (x − 1, y − 1) and (x + 1, y + 1).
This case is in fact already dealt with, as the situation is identical to the previous case up to the reflection switching the corners of B 2 .
With Cases 3a-3g above, we have covered all possible cases and shown that H has a winning strategy for the secure game.
With a winning strategy for the secure game in hand, we now show that in the q-double-response game H can ensure that the grid is secure at the end of each of her turns.
Proof of Theorem 4.1. Suppose the grid is secure and let D be the set of dual edges claimed by V on his turn, where |D| = r ≤ q. The horizontal player H begins by picking a judicious ordering (to be specified later) of the elements of D as {e 1 , e 2 , . . . , e r }, and then proceeds as if she were playing the secure game, pretending that V plays e 1 , e 2 , . . . , e r in that order and responding to each e i in turn.
For each i = 1, . . . , r, let L i be the set of blue edges (including blue double-edges) that H has claimed after i of her turns have occurred in this auxiliary secure game. We do not include in L i any blue edge broken by V in any of his first i turns.
Recall that in the secure game, H responds to V 's claim of the dual edge e i with two, three or four edges, depending on whether e i breaks 0, 1 or 2 blue edges. As such, we have that |L i | ≤ 2i for all i = 1, . . . , r. Once H has gone through every dual edge of D, she has a set L r of at most 2r edges such that if H claims all the edges in L r in response to V 's claim of D, then the grid is back to a secure position in the q-double-response game. The only problem that could occur in this scenario is that during some turn of the auxiliary secure game, say turn i, the dual edge e i claimed by V breaks one of the restrictions (a)-(c) we imposed on the secure game. We show below that this can be avoided by picking a judicious ordering on D. Combined with Lemma 4.11, this will complete the proof of Theorem 4.1.
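The bound |L i | ≤ 2i can be checked mechanically. In the sketch below (our own, with the broken-edge counts b i as free inputs), each response adds b i + 2 edges while the b i broken edges leave the running set, so the set grows by exactly two edges per step in this simplified model:

```python
# Our own check of the edge accounting in the auxiliary secure game: if V's
# i-th dual edge breaks b_i blue edges (b_i in {0, 1, 2}, counted with
# multiplicity), H responds with b_i + 2 blue edges, while the b_i broken
# edges leave the running set L_i.  In this simplified model |L_i| is exactly
# 2i; in the game itself responses may overlap, whence |L_i| <= 2i.
from itertools import product

def blue_count(breaks):
    """|L_i| after responses to dual edges breaking b_1, ..., b_i blue edges."""
    size = 0
    for b in breaks:
        size += (b + 2) - b  # b + 2 edges claimed, b broken edges removed
    return size

for breaks in product((0, 1, 2), repeat=4):
    for i in range(1, 5):
        assert blue_count(breaks[:i]) <= 2 * i
```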
The first restriction (a) on V 's moves in the secure game is that V is not allowed to claim a dual edge as red if doing so would create a cycle or an arch of red dual edges. As we have shown that one can assume V never plays such an edge in the q-double-response game, this restriction will not be broken by any of the dual edges in D. The second restriction (b) is that V may not claim a dual edge if doing so connects a top component to a bottom component. We know by Lemma 4.9 that if the grid is secure at the beginning of a turn of the q-double-response game, then V cannot win in that turn. As such, there cannot be a dual edge in D that connects a top component to a bottom component, and so this restriction is not broken either.
The third and final restriction (c) is that if C is a floating component and P is the path of blue edges that helps secure C, then V may not claim a red dual edge that breaks a blue edge from P if claiming that dual edge would turn C into either a bottom or top component. It is here that our judicious ordering of D comes into play and ensures restriction (c) is respected.
We order the dual edges in D as follows. Due to restrictions (a) and (b), every top (respectively bottom) component is a rooted tree whose root is a top-most (respectively bottom-most) dual vertex. Let D be ordered in any way such that, if e, e′ ∈ D are two dual edges that are part of the same bottom or top component C and e is strictly closer in graph distance in C to the root of C than e′, then e appears before e′ in the ordering of D. (Such an ordering clearly exists, by proceeding component by component.) We claim that ordering D in this way guarantees that V never breaks the third restriction when we play the dual edges one by one. Indeed, suppose there is a dual edge e_i ∈ D such that before e_i is played in the secure game, there exists a floating component C, secured by a path P and a bracket B, that becomes a bottom or top component once e_i has been played. Given our ordering on D, no other edge of D meeting C can have been played in the secure game before e_i. In particular, all the edges of P were present before V played the dual edge-set D in the q-double-response game and H introduced the auxiliary secure game. Hence e_i cannot break an edge of P (i.e. e_i must lie across B), and restriction (c) is respected.

Remark 4.12. As pointed out by a referee, our proof of Theorem 1.1 implies that if p > 2q then when Maker plays a (p, q)-crossing game on S_{∞×(q+1)}, not only can she ensure that Breaker never claims a set of edges corresponding to a top-bottom dual crossing path, but in addition, using her p − 2q extra edges at each turn, she can actually build an unbounded component. In this sense, p > 2q is a strong Maker win. On the other hand, this is not true if p = 2q. Indeed, Breaker can follow the strategy of claiming on each of his turns the bottom q edges of a top-bottom dual crossing path that lies at graph distance at least 2q from any previously claimed edge.
Then should Maker fail to respond by playing all her 2q edges within distance at most q from Breaker's edges, Breaker is able to complete a top-bottom dual crossing path on his next turn. Thus at p = 2q on S ∞×(q+1) , Maker has only just enough power to stymie Breaker, but is not able to actively construct a large structure for herself.
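The root-first ordering of D used in the proof of Theorem 4.1 can be sketched concretely. The following toy Python function (the tree representation and function name are ours, not the paper's) orders a set of tree edges so that edges closer to the root come first:

```python
from collections import deque

def root_first_order(tree, root, D):
    """Order the edge-set D so that, within a rooted tree, edges closer
    to the root precede edges further from it, as in the proof of
    Theorem 4.1.  `tree` maps each vertex to a list of its neighbours;
    `D` is a collection of tree edges given as frozensets {u, v}.
    """
    # BFS from the root gives each vertex its graph distance to the root.
    dist = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in tree[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    # An edge's distance to the root is that of its closer endpoint.
    return sorted(D, key=lambda e: min(dist[x] for x in e))
```

Sorting by the closer endpoint's distance guarantees that an edge is never played before an edge of D lying between it and the root, which is exactly the property the proof needs.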

Other graphs and other games
The crossing games we study in this paper may be viewed as special cases of the following generalization of the Shannon switching game.

Definition 5.1 ((p, q)-Shannon switching game). A Shannon game-triple is a triple (G, A, B), where G is a finite multigraph (possibly with loops) and A, B are sets of vertices of G. For p, q ∈ ℕ, the (p, q)-Shannon switching game on (G, A, B) is played on the board E(G) as follows.
Two players, Maker and Breaker, play in alternating turns. Maker plays first and in each of her turns claims p (as yet unclaimed) edges of the board E(G); Breaker in each of his turns answers by claiming q (as yet unclaimed) edges of the board. Maker wins the game if she manages to claim all the edges of a path joining A to B (i.e. a path from some a ∈ A to some b ∈ B: we call such a path an A-B crossing path). Otherwise Breaker wins.
The (p, q)-crossing games we study in this paper are instances of the (p, q)-Shannon switching game on (G, A, B), where G = S_{m×n} and A and B are the sets of left-hand side and right-hand side vertices of S_{m×n} respectively. The generalized Shannon switching game satisfies some obvious monotonicity properties with regard to the board, which we record in Proposition 5.2 below. Given a multigraph G and two distinct vertices u, v ∈ V(G), let m_G(u, v) denote the number of edges between u and v, and let m_G(v) denote the number of loops at v. Let G′ be any multigraph obtained by taking G, deleting some vertex v ∈ V(G), replacing it with two adjacent vertices v_1, v_2, and then adding in edges adjacent to v_1 or v_2 until the relations m_{G′}(u, v_1) + m_{G′}(u, v_2) = m_G(u, v) are satisfied for all u ∈ V(G) \ {v}. We refer to this process as vertex-separation. Vertex-separation may be thought of as an inverse operation to performing an edge-contraction of the edge {v_1, v_2} in G′. Then the following hold.
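As an illustration, vertex-separation can be sketched on a loopless multigraph. This is a simplification of our own: we ignore the loop counts m_G(v), and we take the natural reading that the m_G(u, v) parallel edges at v are distributed between the two new vertices v_1 and v_2:

```python
from collections import Counter

def vertex_separation(mult, v, split):
    """Toy vertex-separation on a loopless multigraph.

    `mult` maps frozenset({u, w}) -> number of parallel edges between
    u and w.  `split[u]` says how many of the m_G(u, v) edges should be
    attached to v1; the rest go to v2.  Returns the multiplicity table
    of the separated multigraph G'.
    """
    v1, v2 = (v, 1), (v, 2)           # labels for the two new vertices
    new = Counter()
    for e, m in mult.items():
        if v in e:
            (u,) = e - {v}            # the other endpoint of the edge
            a = split.get(u, 0)
            if a:
                new[frozenset({u, v1})] = a
            if m - a:
                new[frozenset({u, v2})] = m - a
        else:
            new[e] = m                # edges avoiding v are untouched
    new[frozenset({v1, v2})] = 1      # v1 and v2 are made adjacent
    return new
```

By construction the multiplicities at v_1 and v_2 sum to the original multiplicity at v for every other vertex u, which is the defining property of the operation.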
Proof. We generalize the arguments of Theorem 1.2 as follows. At any point in the game, for 0 ≤ j ≤ p we say a strip S_i is (k, j)-valid if it contains exactly kq red edges and is in a winning position for Breaker in the (p, q)-Shannon switching game on (G_i, A_i, B_i), with Maker getting to play any j edges first before the game resumes with it being Breaker's turn to play. If a strip is not (k, j)-valid for any 0 ≤ j ≤ p, then we say that it is invalid. As Breaker has a winning strategy for the (p, q)-Shannon switching game on (G_i, A_i, B_i) under BF rules for each i, each strip starts as (0, 0)-valid. Note that for j ≤ j′, any strip that is (k, j′)-valid is also (k, j)-valid. Thus we say that a strip is exactly (k, j)-valid if it is (k, j)-valid but not (k, j + 1)-valid. Note that if any strip S_i is (k, j)-valid, then Breaker can play q edges in S_i and turn it into a (k + 1, p)-valid strip. Indeed, as S_i is (k, j)-valid, we know that it is also (k, 0)-valid, and so it is in a winning position for Breaker in the (p, q)-Shannon switching game where it is Breaker's turn to play. Breaker plays the q edges that a winning strategy would prescribe, so that the strip is in a winning position for Breaker in the (p, q)-Shannon switching game even though it is Maker's turn to play. Thus Breaker has turned S_i into a (k + 1, p)-valid strip. The game begins with Maker playing edges in up to l(p + 1) − 1 different strips, possibly making them invalid. From here we split the game into a number of different phases. We will show by induction on k that for each k = 0, 1, . . . , T, at the start of phase k it will be Breaker's turn to play and the number of (k, 0)-valid strips will be at least ((p + 1)s)^{T−k}. As noted above, our inductive statement is clear when k = 0. Suppose the statement is true for k. On each turn in phase k, Breaker will choose l different (k, 0)-valid strips and play q edges in each, turning them into (k + 1, p)-valid strips.
Maker can now distribute her l(p + 1) − 1 edges among the strips as she likes. In the worst-case scenario, each edge that Maker plays can either turn a strip that is exactly (k + 1, j)-valid into one that is exactly (k + 1, j − 1)-valid (when j ≥ 1), or turn a (k, 0)-valid or (k + 1, 0)-valid strip into an invalid one. For each j = 0, 1, . . . , p, let R_t(j) = R_t(j, k) be the number of exactly (k + 1, j)-valid strips on the board after a total of t combined edges in phase k have been played by the two players. Moreover, let R_t = R_t(k) be given by R_t = Σ_{j=0}^{p} (j + 1) R_t(j). We have that, if after t edges have been played it is Breaker's turn to play and he plays q edges, then R_{t+q} = R_t + p + 1. On the other hand, if after t edges it is Maker's turn to play, then R_{t+1} ≥ R_t − 1. As Breaker plays a total of lq edges while Maker plays a total of l(p + 1) − 1 edges on their respective turns, for a combined total of s edges, we have that R_{rs} ≥ r for all r ∈ ℤ_{≥0}, at least until phase k ends. Breaker decides that phase k has finished and phase k + 1 has begun when R_{rs} ≥ (p + 1)((p + 1)s)^{T−k−1} for some r ∈ ℤ_{≥0}. Note that after Maker and Breaker have both completed their turns, the number of (k, 0)-valid strips has decreased by at most s. Thus, as the number of (k, 0)-valid strips at the start of phase k is at least ((p + 1)s)^{T−k}, we know that the number of (k, 0)-valid strips for Breaker to play in will not run out before R_{rs} ≥ (p + 1)((p + 1)s)^{T−k−1}. As R_{rs} ≥ (p + 1)((p + 1)s)^{T−k−1}, we have that the number of (k + 1, 0)-valid strips at the start of phase k + 1 is at least ((p + 1)s)^{T−k−1} and it is Breaker's turn to play, as required.
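The net drift of the potential R_t in the argument above can be checked with a toy worst-case simulation (the function and variable names are ours): Breaker's turn raises R by p + 1 per converted strip, and each of Maker's l(p + 1) − 1 edges lowers R by at most 1, for a net gain of at least 1 per full round.

```python
def potential_after_rounds(l, p, rounds):
    """Worst-case drift of the potential R = sum_j (j+1)*R(j): each of
    Breaker's l moves (q edges into one strip) converts a (k,0)-valid
    strip into a (k+1,p)-valid one, adding p+1 to R, while each of
    Maker's l(p+1)-1 edges lowers R by at most 1.
    """
    R = 0
    for _ in range(rounds):
        R += l * (p + 1)        # Breaker's turn: l conversions, +(p+1) each
        R -= l * (p + 1) - 1    # Maker's turn: worst case, -1 per edge
    return R                    # net +1 per round, so R >= number of rounds
```

This is exactly the inequality R_{rs} ≥ r used to decide when a phase ends.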
To finish the proof, we note that at the start of phase T, there is at least one strip S_i that is (T, 0)-valid, and has been obtained by Breaker following a winning strategy on this strip for the (p, q)-Shannon switching game under the BF rules. As Breaker can win the (p, q)-Shannon switching game on S_i in at most T moves, he has in fact won the local (p, q)-Shannon switching game on this strip S_i, and with it the global (l(p + 1) − 1, lq)-Shannon switching game on (G, A, B).
Remark 5.4. The bound on m given in Theorem 5.3 is precisely the bound on the number of strips m_0/(n + 1) given in the proof of Theorem 1.2: simply substitute in the values p = q = 1 corresponding to the powers of the players in the local game on the strips and replace l with q in the bound for m to recover the bound on m_0/(n + 1). Note in particular that q plays a different role in the statements of the two theorems.

Just as we have been able to generalize our winning Breaker strategy for the (2q − 1, q)-crossing game to other Shannon game-triples, we believe our winning Maker strategy for the (2q, q)-crossing game on S_{m×(q+1)}, as described in the proof of Theorem 1.1, can be adapted to a number of other planar lattices. The key idea here is that if Λ is a planar lattice where an isoperimetric inequality similar to that of Lemma 2.3 holds, then a Maker strategy similar to that in the proof of Theorem 1.1 should work in Λ. More precisely, suppose Λ is a planar lattice such that there exists a constant a such that, for all k ∈ ℕ and for all connected components C comprised of k edges, the dual boundary cycle to C consists of at most ak + (a + 2) edges. In this case we believe that there exists some constant c such that Maker has a winning strategy for (aq, q)-crossing games on all arbitrarily long substrips of Λ of 'width' at least c. Of course, modifying our proof of Theorem 1.1 to adapt it to a given planar lattice will require a careful definition of brackets and a large amount of case-checking (as is already the case in the proof of Theorem 1.1 itself), so we make no attempt to do so here.
Given our original motivation from percolation theory, it would be natural to study Shannon switching games on strips of any of the standard two-dimensional lattices studied in percolation. For instance, who wins crossing games on 'rectangular-shaped' subgraphs of the triangular, honeycomb or Kagome lattices? More generally, this is a natural problem for any of the 11 Archimedean lattices.
In a different direction, one could consider site-percolation rather than bond-percolation, by playing variants of our generalized Shannon switching games where the players take turns claiming vertices rather than edges. One famous example of such a game is the game of Hex, where the players take turns claiming vertices on a subset of the triangular lattice, both trying to create certain crossing paths. It is easy to prove that a vertex-analogue of Lemma 2.3 holds in this lattice (i.e. that any set of k vertices inducing a connected component of the triangular lattice can be surrounded by a bounding cycle consisting of at most 2k + 4 vertices), and we expect that our Maker winning strategy for the (2q, q)-crossing game should carry over without excessive technicalities (but not without care and case-checking).

Concluding remarks
There are many questions arising from our work. Outside of the special cases (p, q) = (1, 1), p ≥ 2q and p ≤ q/2, the problem of determining which of Maker or Breaker has a winning strategy for the (p, q)-crossing game on S_{m×n} is completely open for pairs (m, n) that fall outside the scope of Theorem 1.2 and Propositions 2.4 and 2.5. Resolving this seems an obvious (but challenging) problem.

Problem 6.1. Given natural numbers p, q, n ∈ ℕ, determine the greatest m ∈ ℕ such that Maker has a winning strategy for the (p, q)-crossing game on S_{m×n}.
As a special, easier case, one could consider the following problem of determining the optimal value of m in the variant of the (1, 1)-crossing game where Maker gets an extra edge every M turns. It is not hard to show that Maker can win in this variant for some m = n + Ω(log n), and it would be very interesting to determine whether she has a winning strategy for m = (1 + ε)n for some constant ε = ε(M) > 0.
In a similar spirit, setting one's sights slightly lower than Problem 6.1, one could try to prove that having extra power allows one to win on a significantly longer board.

Conjecture 6.3. The following hold.
(i) For every q ∈ ℕ, there exists ε > 0 such that, for all n sufficiently large, Maker wins the (q + 1, q) game on S_{(1+ε)n×n}.

(ii) For every p ∈ ℕ, there exists ε > 0 such that, for all m sufficiently large, Breaker wins the (p, p + 1) game on S_{m×(1+ε)m}.
An even more basic problem is showing that when the powers are balanced, Breaker should win on a narrower board, overcoming Maker's first-player advantage. In a different direction, one may ask for optimal bounds on m in Theorem 1.2.

Question 6.5. Let n, q ∈ ℕ. What is the smallest m_0 = m_0(n, q) such that Breaker wins the (2q − 1, q)-crossing game on S_{m_0×n}? In particular, for q fixed, is m_0(n, q) sub-exponential in n?
A related question, which would help improve the bounds on m for the Breaker strategy we developed in the proof of Theorem 1.2, is the following.

Question 6.6. Under perfect play, how long does a game of Bridg-it last?
We make no attempt to answer this question here, but we believe it may be possible to shed some light on the answer through careful analysis of the Maker-win strategy recorded in Theorem 2.9.
In yet another direction, efforts to apply the biased Erdős–Selfridge criterion [1, 12] to Problem 6.1 lead to some intriguing questions on weighted sums over crossing paths, connected to the study of fugacity in statistical physics and to problems in analytic combinatorics (see e.g. [9]). Explicitly, let H(m, n) denote the collection of all left-to-right crossing paths in the rectangle S_{m×n}. Given a path π ∈ H(m, n), let ℓ(π) denote its length. Then the biased Erdős–Selfridge criterion due to Beck implies that if

Σ_{π∈H(m,n)} (1 + q)^{−ℓ(π)/p} < 1/(1 + q),   (6.1)

then Breaker has a winning strategy for the (p, q)-crossing game on S_{m×n}. In particular, suppose m = ρn for some ρ > 0, and that we knew that, as n → ∞, the number of crossing paths of S_{m×n} of length ℓ grew no faster than (λ_ρ + o(1))^ℓ, for some ρ-dependent constant λ_ρ (this would be a 'crossing path' analogue of the connective constant familiar from the study of self-avoiding walks). Then (6.1) would imply that Breaker has a winning strategy whenever λ_ρ < (1 + q)^{1/p}. If for some ρ < 3 the value of λ_ρ were found to be sufficiently small so that λ_ρ < 2, this would show that Breaker wins the (2, 3)-crossing game on S_{m×n} for all n sufficiently large, giving a non-trivial improvement on what we know about that game. Of especial interest would be the case ρ = 1: one would guess that Breaker's extra power in the (2, 3)-crossing game would allow him to win on S_{n×n}, say, but we currently have no proof of even this weakening of Conjecture 6.3 (ii).

Finally, variants of our games on other lattices, or where the players claim vertices rather than edges, as discussed in Section 5, are both interesting and almost completely open.
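For concreteness, the left-hand side of (6.1) can be evaluated by brute force on toy boards. The sketch below (the function name is ours) enumerates all left-to-right self-avoiding paths in a small grid, which can only overestimate the sum over minimal crossing paths, so the criterion it checks is conservative:

```python
def crossing_sum(m, n, p, q):
    """Brute-force evaluation of sum over left-right self-avoiding
    paths pi in an m x n grid of (1+q) ** (-len(pi)/p), i.e. the
    left-hand side of condition (6.1).  Toy board sizes only.
    """
    def neighbours(v):
        i, j = v
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            u = (i + di, j + dj)
            if 0 <= u[0] < m and 0 <= u[1] < n:
                yield u

    total = 0.0

    def dfs(v, visited, edges):
        nonlocal total
        if v[0] == m - 1:                 # reached the right-hand side
            total += (1 + q) ** (-edges / p)
            return
        for u in neighbours(v):
            if u not in visited:
                dfs(u, visited | {u}, edges + 1)

    for j in range(n):                    # start anywhere on the left side
        dfs((0, j), {(0, j)}, 0)
    return total
```

Whenever the returned value falls below 1/(1 + q), Beck's criterion certifies a Breaker win on that board; for the (2, 3)-game one would compare the sum against 1/4.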