MULTIPLICATION TABLES AND WORD-HYPERBOLICITY IN FREE PRODUCTS OF SEMIGROUPS, MONOIDS AND GROUPS

Abstract. This article studies the properties of word-hyperbolic semigroups and monoids, that is, those having context-free multiplication tables with respect to a regular combing, as defined by Duncan and Gilman [‘Word hyperbolic semigroups’, Math. Proc. Cambridge Philos. Soc. 136(3) (2004), 513–524]. In particular, the preservation of word-hyperbolicity under taking free products is considered. Under mild conditions on the semigroups involved, satisfied, for example, by monoids or regular semigroups, we prove that the semigroup free product of two word-hyperbolic semigroups is again word-hyperbolic. Analogously, with a mild condition on the uniqueness of representation for the identity element, satisfied, for example, by groups, we prove that the monoid free product of two word-hyperbolic monoids is word-hyperbolic. The methods are language-theoretically general, and apply equally well to semigroups, monoids or groups with a $\mathbf {C}$-multiplication table, where $\mathbf {C}$ is any reversal-closed super-$\operatorname {\mathrm {AFL}}$. In particular, we deduce that the free product of two groups with $\mathbf {ET0L}$ (respectively, indexed) multiplication tables again has an $\mathbf {ET0L}$ (respectively, indexed) multiplication table.

A second, and somewhat more robust, approach is to use the methods of formal language theory, which is the starting point of the present article.
Language-theoretic methods in group theory have a rich history spanning the past half century, starting with Anīsīmov in 1969 [7] and continuing with the seminal [8]. It is in this latter article that the 'word problem' of a group, being the formal language of words over a generating set representing the identity element, was introduced and studied. The connections between groups and context-free languages were explored further by Anīsīmov [4-6, 9, 10]. Muller and Schupp [86] (contingent on a weak form of a deep result by Dunwoody [39]) subsequently proved a striking classification: a finitely generated group has context-free word problem if and only if it is virtually free. This decisively demonstrated the depth of the connection first uncovered by Anīsīmov. Given the importance of context-free languages to the theory of groups, it seems natural to desire a purely language-theoretic definition of hyperbolicity in groups. One such definition was given by Grunschlag [63], who proved that a group is hyperbolic if and only if its word problem is generated by a terminating growing context-sensitive grammar. (A terminating grammar is one in which for every variable v, there is a sequence of productions that transforms v into a word over the terminals; see [63, Section 1.8.5].) An arguably more elegant characterisation, using the weaker expressive power of context-free languages, was given by Gilman [49]. This definition has the added benefit of being directly generalisable to semigroups, which was done by Duncan and Gilman [38]. To distinguish it from the geometric variant, this form of hyperbolicity is called word-hyperbolicity. Loosely speaking, a semigroup S is word-hyperbolic if there exists a regular language of representatives (with no requirement of uniqueness) such that the multiplication table for S with respect to this language can be described by a context-free language. This definition (which is described formally in Section 1.3) is equivalent to geometric hyperbolicity
for groups [38, Corollary 4.3] and for completely simple semigroups [46, Theorem 4.1]. In general, however, word-hyperbolic semigroups form a more restricted class than hyperbolic semigroups, and appear somewhat more amenable to general results than the geometric approach to hyperbolic semigroups. (Having said this, and as a paper on hyperbolicity of semigroups may be considered skewed without some references to the geometry of semigroups, there is a recent trend, pioneered by Gray and Kambites, of successfully handling the directed geometry of semigroups in a manner extending the usual geometric group theory (for example, the Milnor-Schwarz lemma [103]); see [54-58]; see also [47, 68, 76, 101].) For example, just as in groups, there are links between word-hyperbolicity and automaticity in semigroups.
Whenever a generalisation (for example, of hyperbolicity in groups) is made, it is useful to ask: which properties should be retained, and which should not? For example, hyperbolicity in groups is independent of the generating set chosen, which is a rather (one may argue) essential and desirable property; this property holds also for word-hyperbolic semigroups [38, Theorem 3.4]. Furthermore, the word problem is well known to be decidable in all hyperbolic groups (in linear time); for word-hyperbolic semigroups, the word problem is also decidable [67, Theorem 3.8], in fact in polynomial time [26, Theorem 7.1].
However, while hyperbolic groups are automatic [44, Theorem 3.4.5], it is not true that word-hyperbolic semigroups are always automatic [67, Example 7.7]. Similarly, while the isomorphism problem is decidable for hyperbolic groups [34], it is undecidable in general for word-hyperbolic semigroups [26, Theorem 4.3]. When considering which properties are desired, and are reasonable to desire, if a definition is found not to satisfy one such desired property, then an amendment to the definition which forces this property to hold may be considered. (In fact, two amendments have already been proposed: one by Hoffmann and Thomas [69], which recovers automaticity, and one by Cain and Pfeiffer [26], which yields a word problem decidable in O(n log n) time.) It is the view of the author that free products of word-hyperbolic semigroups ought to be word-hyperbolic; free products are free constructions, and free objects (for example, free semigroups) are word-hyperbolic. While we are not able to prove this result with exactly this statement, the main results of this article demonstrate that any possible counterexample to the general statement will be exceptional, rather than the norm.
The outline of the paper is as follows. In Section 1, we give some background, necessary definitions and notation. In particular, in Section 1.8, we give a brief overview of the connections between substitutions in formal language theory (crucial to the arguments in subsequent sections) and ET0L systems. In Section 2, we prove the main result for semigroup free products, which is the following theorem.

THEOREM A. Let S_1, S_2 be 1-extendable word-hyperbolic semigroups. Then the free product S_1 * S_2 is word-hyperbolic.
The technical condition for a word-hyperbolic semigroup S to be 1-extendable is defined and explored in Section 1.4, and loosely speaking consists of a condition ensuring that the word-hyperbolic structure for S does not collapse if one adjoins an identity to S. In particular (see Lemma 1.6), any monoid and any (von Neumann) regular semigroup is 1-extendable, so Theorem A applies when S_1 and S_2 are from either of these classes.
To deal with monoid free products (and, as a particular case, group free products), in Section 3, we first develop some technical, purely language-theoretic tools which we call polypartisan ancestors. Loosely speaking, polypartisan ancestors model a form of sequential rewriting with respect to rules of the form a → u, where a is a single letter, and where the word w to which the rewriting is applied is divided into some fixed number k of parts such that each part is rewritten using possibly different sets of rules. In Section 4, we then use our previous results on semigroup free products to prove the following main result.

THEOREM B. Let M_1, M_2 be two word-hyperbolic monoids with 1-uniqueness (with uniqueness). Then the monoid free product M_1 * M_2 is word-hyperbolic with 1-uniqueness (with uniqueness).

Introduction and notation
The paper also assumes familiarity with the basics of the theory of semigroup, monoid and group presentations, written Sgp⟨A | R⟩, Mon⟨A | R⟩ and Gp⟨A | R⟩, respectively. For further background, see, for example, [1, 27, 82, 83, 87].
1.1. Formal language theory. We assume the reader is familiar with the fundamentals of formal language theory. In particular, a full AFL (abstract family of languages) is a class of languages closed under homomorphism, inverse homomorphism, intersection with regular languages, union, concatenation and the Kleene star. Furthermore, a class C is reversal-closed if for all L ∈ C, we have L^rev ∈ C. Here, L^rev denotes the language of all words in L read backwards (see Section 1.2 for a formal definition). For some background on this, and other topics in formal language theory, we refer the reader to standard books on the subject [20, 64, 70, 96]. The class of context-free languages is denoted CF.
We also, in Sections 5 and 1.8, make reference to the class IND of indexed languages. The latter was introduced in Aho's Ph.D. thesis [2] (see also [3]) as an extension of the context-free languages; we refer the reader to, for example, [48, 65, 102] or [70, Ch. 14] for particularly readable definitions. Finally, we make some reference to the classes ET0L and EDT0L in Section 5 (but not in the main sections of the paper). These are examples of L-languages, which arise from L-systems. The theory of L-systems originated in 1968 in the work of Lindenmayer [80, 81] (whence the L) as a theory for the parallel branching of filamentous organisms in biology, but subsequently grew into a core branch of formal language theory [66, 93-95]. Because of this vast literature (and as we do not need the definitions), we do not define either ET0L or EDT0L, instead referring the reader to more recent articles on the subject (for example, especially [31], see also [24, 72, 92]). The research topic remains very active; in particular, the connections between ET0L and EDT0L languages and equations over groups and monoids have flourished in recent years; see, for example, [30, 31, 35, 45]. There are also recent links with geometric group theory. For example, Bridson and Gilman [23] famously proved that any 3-manifold group admits a combing which is an indexed language; in fact, their combing is an ET0L language [31] (note that CF ⊊ ET0L ⊊ IND).
A useful analogy to keep in mind is the following: the class CF can be seen as modelling closure under sequential recursion; the class ET0L models closure under parallel recursion. See Section 1.8 for further details on this analogy, particularly Theorem 1.13, as well as [97]. Finally, CF, ET0L and IND are all easily seen to be reversal-closed.

1.2. Rewriting systems.
Let A be a finite alphabet, and let A* denote the free monoid on A, with identity element denoted ε or 1, depending on the context. Let A+ denote the free semigroup on A, that is, A+ = A* − {ε}. For u, v ∈ A*, by u ≡ v, we mean that u and v are the same word. For w ∈ A*, we let |w| denote the length of w, that is, the number of letters in w. For w ≡ w_1 w_2 ⋯ w_n (with w_i ∈ A), we let w^rev ≡ w_n ⋯ w_2 w_1 denote the reversal of w. If u and v represent the same element of a monoid M, then we denote this u =_M v. By M^rev, we mean the reversed monoid, that is, the monoid with the same underlying set as M but with multiplication reversed. That is, M and M^rev are anti-isomorphic (if G is a group, then clearly G ≅ G^rev). Finally, when we say that a monoid M is generated by a set A, we mean that there exists a surjective homomorphism π : A* → M. We use analogous terminology for semigroups and groups.
We give some notation for rewriting systems. For an in-depth treatment and further explanations of the terminology, see, for example, [21, 22, 73]. A rewriting system R on A is a subset of A* × A*. We denote rewriting systems by script letters, for example, R, S, T. An element of R is called a rule. The system R induces several relations on A*. We write u →_R v if there exist x, y ∈ A* and a rule (ℓ, r) ∈ R such that u ≡ xℓy and v ≡ xry. We let →*_R denote the reflexive and transitive closure of →_R. We denote by ↔*_R the symmetric, reflexive and transitive closure of →_R. The relation ↔*_R defines the least congruence on A* containing R. For X ⊆ A*, we let ∇*_R(X) denote the set of ancestors of X with respect to R, that is, ∇*_R(X) = {u ∈ A* : u →*_R x for some x ∈ X}. A rewriting system is monadic if every rule (ℓ, r) satisfies |ℓ| > |r| and r ∈ A ∪ {ε}, and special if additionally r ≡ ε. Every special system is monadic. Let C be a class of languages. A monadic rewriting system R is said to be C if for every a ∈ A ∪ {ε}, the language {u | (u, a) ∈ R} is in C. Thus, we may speak of, for example, C-monadic rewriting systems or context-free monadic rewriting systems. Monadic rewriting systems are extensively treated in [21].

DEFINITION 1.1. Let C be a class of languages. Let R ⊆ A* × A* be a rewriting system. Then we say that R is C-ancestry preserving if for every L ⊆ A* with L ∈ C, we have ∇*_R(L) ∈ C. If every C-monadic rewriting system is C-ancestry preserving, then we say that C has the monadic ancestor property.
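To make the ancestor construction concrete, the following is a minimal sketch (ours, not from the paper; all names are our own) that computes a length-bounded portion of the ancestor set ∇*_R(X) = {u : u →*_R x for some x ∈ X}. Ancestors are found by applying rules backwards, replacing an occurrence of a right-hand side r by the left-hand side ℓ; a length cap keeps the search finite, since for monadic systems the ancestor set is typically infinite.

```python
def bounded_ancestors(rules, targets, max_len):
    """Ancestors of `targets` under `rules` (pairs (l, r)), up to length max_len.

    A word u is an ancestor of x if u rewrites to x; we search backwards
    from the targets, replacing occurrences of r by l.
    """
    frontier = set(targets)
    seen = set(frontier)
    while frontier:
        nxt = set()
        for w in frontier:
            for l, r in rules:
                start = 0
                while True:
                    i = w.find(r, start)  # next occurrence of the rhs r
                    if i == -1:
                        break
                    u = w[:i] + l + w[i + len(r):]  # undo one rewriting step
                    if len(u) <= max_len and u not in seen:
                        seen.add(u)
                        nxt.add(u)
                    start = i + 1
        frontier = nxt
    return seen
```

For instance, for the special system R = {(aa, a)}, the bounded ancestors of {a} are exactly the words a, aa, aaa, … up to the cap.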
The terminology monadic ancestor property was introduced by the author in [91], and also appears in [89, 90], but was treated implicitly already in [21, 73]; see especially [73, Lemma 3.4]. The idea of defining classes of languages via ancestry in rewriting systems is not new, and can be traced back at least to, for example, McNaughton et al.'s Church-Rosser languages [84] or Beaudry et al.'s McNaughton languages [17, 18].

EXAMPLE 1.2. If R ⊆ A* × A* is a context-free monadic rewriting system, and L ⊆ A* is a context-free language, then ∇*_R(L) is a context-free language [21, Theorem 2.2]. That is, every CF-monadic rewriting system is CF-ancestry preserving. Hence, the class of context-free languages has the monadic ancestor property.
Having the monadic ancestor property is analogous to being closed under sequential recursion; see Section 1.8 for further elaboration on this. This gives rise to the notion of a super-AFL.

DEFINITION 1.3. Let C be a full AFL. Then C is said to be a super-AFL if it has the monadic ancestor property.
Hence, by Example 1.2, CF is a super-AFL. For the main body of the text, this is the only super-AFL we deal with; see, however, Section 1.8 for a broader discussion, and Section 5 for generalisations of our results to all reversal-closed super-AFLs. The primary reason for dealing only with context-free languages comes from the importance of CF with regard to word-hyperbolicity.
1.3. Word-hyperbolicity. Let S be a semigroup, finitely generated by some set A, with associated surjective homomorphism π_S : A+ → S. Let R ⊆ A+ be a regular language. If π_S(R) = S, that is, every element of S is represented by some word from R, then we say that R is a regular combing of S. If π_S is bijective when restricted to R, then we say that R is a regular combing with uniqueness. Let #_1, #_2 be two new symbols, and let

T_S(R) = { u #_1 v #_2 w^rev : u, v, w ∈ R and uv =_S w }.

We say that T_S(R) is a multiplication table for S (with respect to R). If this table is context-free, that is, if T_S(R) ∈ CF, then we say that S is a word-hyperbolic semigroup (with respect to the combing R). If R is additionally a combing with uniqueness, then we say that S is word-hyperbolic with uniqueness. Not every word-hyperbolic semigroup is word-hyperbolic with uniqueness [25].
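As a toy illustration of the definition (our own example, not from the paper), consider the free semigroup on one generator a, with the natural combing R = {a^n : n ≥ 1}: since a^m · a^n = a^{m+n}, every table entry has the form a^m #_1 a^n #_2 (a^{m+n})^rev. A sketch enumerating entries up to a length bound:

```python
def table_entries(max_len):
    """Entries u #1 v #2 w^rev of the multiplication table for the free
    semigroup {a}+ with combing R = { a^n : 1 <= n <= max_len }."""
    R = ["a" * n for n in range(1, max_len + 1)]
    entries = set()
    for u in R:
        for v in R:
            w = u + v  # uv = w holds in the free semigroup
            entries.add(u + "#1" + v + "#2" + w[::-1])  # w^rev (trivial here)
    return entries
```

In this example the table is in fact regular; in general, it is the reversal of w, which must match the product of u and v, that makes a pushdown (that is, context-free) description the natural one.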
The above notion of hyperbolicity was introduced by Duncan and Gilman [38]. One can show that if S is hyperbolic with respect to one choice of finite generating set, then it is hyperbolic with respect to every such choice [38, Theorem 3.4]. However, note that even if T_S(R_1) ∈ CF for some regular combing R_1, there may still be some regular combing R_2 of S such that T_S(R_2) ∉ CF. For extensions of the condition T_S(R) ∈ CF to, for example, T_S(R) ∈ ET0L or T_S(R) ∈ IND, see Section 5.
We extend this definition in the obvious way to monoids (and groups) by substituting A* for A+. Thus, a monoid M generated by A is word-hyperbolic 'as a monoid' if and only if there exists a regular combing R ⊆ A* such that T_M(R) ∈ CF. However, by [38, Theorem 3.5], a monoid is word-hyperbolic 'as a monoid' if and only if it is word-hyperbolic as a semigroup (in the above sense). We therefore speak of 'word-hyperbolic monoids', always referring to a regular combing R ⊆ A*. In fact, it is not difficult to see, by using a rational transduction, that if M is word-hyperbolic with respect to a combing R ⊆ A+, then it is word-hyperbolic with respect to R ∪ {ε} (see, for example, the first paragraph in the proof of Lemma 1.5). We may thus assume without loss of generality that any regular combing for M includes the empty word (which necessarily represents the identity element). If M is word-hyperbolic with respect to the combing R, and the only word in R representing the identity element of M is the empty word, then we say that M is word-hyperbolic with 1-uniqueness.
One can show that a group is word-hyperbolic if and only if it is hyperbolic in the usual sense, that is, the sense of Gromov [38, Theorem 4.3]. Furthermore, one can show that, due to the Muller-Schupp theorem, if G is a group generated by A, then T_G(A*) is context-free if and only if G is virtually free [49, Theorem 2(2)], a condition which is significantly stronger than hyperbolicity. Indeed, more generally, it is not difficult to see that a semigroup S is word-hyperbolic with respect to the combing A+ if and only if S has a context-free word problem (in the sense of Duncan and Gilman [38, Section 5]).
For brevity, for i = 1, 2, we let …

1.4. 1-extendability. We now define a slightly technical condition, which proves useful in Section 2. Let S be a semigroup. We define S^1 to be the semigroup with an identity 1 adjoined, regardless of whether S has an identity element already or not. (If S is a monoid, then defining S^1 in this manner (rather than simply taking S^1 = S) is only a technicality, but is used to avoid some other language-theoretic technicalities.)

DEFINITION 1.4 (1-extendable). Let S be a word-hyperbolic semigroup with respect to a regular combing R ⊆ A+. We say that S is 1-extendable if S^1 is word-hyperbolic with respect to the regular combing R ∪ {ε}.
Thus, if S is a 1-extendable word-hyperbolic semigroup, then S^1 is word-hyperbolic. We do not know if the converse holds in general. Our main interest in 1-extendability is in the statement of Theorem A, in which we show that the free product of 1-extendable word-hyperbolic semigroups is again word-hyperbolic. We begin by showing that 1-extendability is not particularly elusive.

LEMMA 1.5 (Kambites). Let S be a word-hyperbolic semigroup. If every element of S has a right stabiliser (that is, for every s ∈ S, there exists some t ∈ S with st = s), then S is 1-extendable.
PROOF. Suppose S is generated by the finite set A, with R ⊆ A+ a regular combing and T_S(R) context-free. As noted by Duncan and Gilman [38, Question 1], to show that S^1 is word-hyperbolic, it suffices to show that a certain language Q is context-free: the multiplication table for S^1 with respect to R ∪ {ε} is a union of T_S(R) and languages obtainable from Q by a rational transduction, and hence is also context-free.
For every a ∈ A, let ā ∈ R be a word such that aā =_S a, that is, a right stabiliser for a, which exists by assumption. Let Ā = {ā | a ∈ A}. Then for every u ∈ R, say u ≡ a_1 a_2 ⋯ a_n, we have uā_n =_S u. By partitioning R based on the final letters of words (which is well defined as ε ∉ R), we find that the language … is a regular language, being a finite union of (pairwise disjoint) regular languages. Now, … This latter language L is just given by … Hence, as CF is closed under union and intersection with regular languages, we have that L ∈ CF if and only if for all a ∈ A, L_a ∈ CF. As T_S(R) ∈ CF, and U #_2 R^rev is regular, we have L ∈ CF, and thus also L_a ∈ CF for all a ∈ A.
For every a ∈ A, let τ_a be the rational transduction applied to L_a defined by deleting #_1 ā #_2 in the input word and replacing it by # in the output word, and fixing all other parts of the input word in L_a. Then … As CF is closed under rational transduction, we have τ_a(L_a) ∈ CF for all a ∈ A. However, clearly … As CF is closed under finite unions, we have Q ∈ CF, as was to be shown.
The author thanks Mark Kambites for suggesting Lemma 1.5 and its proof. We rephrase the above result in terms of 1-extendability, and note the following direct consequence.

LEMMA 1.6. Let S be a word-hyperbolic semigroup. If S is either: (1) a monoid; or (2) a (von Neumann) regular semigroup, then S is 1-extendable.

In particular, we find that hyperbolic groups are 1-extendable. On a philosophical note, we remark (for reasons not too dissimilar from those discussed in Section 4.3) that 1-extendability strikes the author as a very natural condition for working with word-hyperbolic semigroups in the first place.
1.5. Semigroup free products. Semigroup free products are described in, for example, [71, Ch. 8.2]. We give an overview of this theory here, with some additional terminology that simplifies later notation.
Let S_1, S_2 be semigroups, and let S = S_1 * S_2 denote their semigroup free product. We identify S with the semigroup whose elements are all finite nonempty alternating sequences (s_1, s_2, …, s_n) of elements s_i ∈ S_1 ∪ S_2, where alternating means that s_i and s_{i+1} come from different factors for 1 ≤ i < n. We write s_i ∼ s_j if s_i and s_j come from the same factor, and s_i ≁ s_j otherwise. We always have s_i ∼ s_i and s_i ≁ s_{i+1}. Given any nonempty sequence s = (s_1, s_2, …, s_n) with s_i ∈ S_1 ∪ S_2, we define the alternatisation s̄ of s to simply be s̄ = s if s is alternating; otherwise, if, say, s_i ∼ s_{i+1}, we define s̄ as the alternatisation of (s_1, …, s_i · s_{i+1}, …, s_n). Clearly, the alternatisation of s is a uniquely defined alternating sequence.
The product of two alternating sequences in S is given by the alternatisation of the concatenation of the sequences. That is, explicitly, multiplication in S is given by

(s_1, …, s_m) · (t_1, …, t_n) = (s_1, …, s_m, t_1, …, t_n) if s_m ≁ t_1, and (s_1, …, s_m · t_1, …, t_n) if s_m ∼ t_1.   (1-1)

See, for example, [71, Eq. (8.2.1)]. Note that, in particular, the semigroup free product of two monoids is never a monoid. We now define monoid free products in a similar manner.
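The alternatisation and the product in S can be sketched as follows (our own encoding, not from the paper: an element is a list of pairs (i, s) with s ∈ S_i, and mul[i] is assumed to be the multiplication of the factor S_i):

```python
def alternatise(seq, mul):
    """Merge adjacent entries from the same factor, left to right."""
    out = []
    for i, s in seq:
        if out and out[-1][0] == i:
            # s_i ~ s_{i+1}: multiply inside the common factor S_i
            out[-1] = (i, mul[i](out[-1][1], s))
        else:
            out.append((i, s))
    return out

def sgp_free_product(x, y, mul):
    """Product in S_1 * S_2: alternatisation of the concatenation."""
    return alternatise(x + y, mul)
```

For example, with both factors the additive semigroup of positive integers, the product of (2, 3) and (1, 5) (first coordinates from S_1, second from S_2) merges the two middle entries into a single S_2-entry 4.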
1.6. Monoid free products. Let M_1, M_2 be monoids, and let M = M_1 * M_2 denote their monoid free product. We identify M with the monoid whose elements are all finite reduced alternating sequences (m_1, m_2, …, m_n), where reduced means that m_i ≠ 1 for all 1 ≤ i ≤ n. Given an alternating sequence s = (s_1, s_2, …, s_n), we define the reduction s̃ of s to be s if s is already reduced; and, otherwise, define s̃ to be the reduction of the alternatisation of the subsequence (s_{i_1}, s_{i_2}, …, s_{i_k}) consisting of precisely those s_{i_j} that satisfy s_{i_j} ≠ 1.
Clearly, the reduction s̃ of s is a uniquely defined reduced alternating sequence. The product of two reduced sequences in M is then defined as the reduction of the concatenation of the sequences. Hence, similar to Equation (1-1), we easily find an explicit expression for multiplication of elements in a monoid free product as

(s_1, …, s_n) · (t_1, …, t_m) = (s_1, …, s_n, t_1, …, t_m) if s_n ≁ t_1; (s_1, …, s_n · t_1, …, t_m) if s_n ∼ t_1 and s_n · t_1 ≠ 1; and (s_1, …, s_{n−1}) · (t_2, …, t_m) if s_n ∼ t_1 and s_n · t_1 = 1.   (1-2)

See, for example, [71, page 266]. Unlike the case of the semigroup free product, the empty sequence is always an identity element for M, so the free product of two monoids is always (obviously) a monoid. We also remark on the recursive definition of multiplication in the third case of Equation (1-2). We may, of course, have that s_{n−1} · t_2 = 1, in which case we continue reducing. In particular, the monoid free product of two groups is a group, and hence the monoid free product of two groups coincides with the usual group free product of the same groups.

1.7. Alternating words and combings. We make the following definition of alternating words, which is useful in describing the language theory of free products. Let R_1, R_2 be regular languages over some disjoint alphabets A_1, A_2, respectively, and let w ∈ (R_1 ∪ R_2)*. Then we can factorise w (not necessarily uniquely!) as a product w ≡ x_1 x_2 ⋯ x_n, where for every 1 ≤ i ≤ n, we have x_i ∈ R_{X(i)} for some function X : {1, …, n} → {1, 2}. If X(i) = X(i + 1) for some i, then we write this as x_i ∼ x_{i+1} (context will always make this slightly abusive notation clear). If X is such that x_i ≁ x_{i+1}, that is, X(i) ≠ X(i + 1), for all 1 ≤ i < n, which is to say that X is a standard parametrisation when restricted to {1, …, n}, then we say that the factorisation x_1 x_2 ⋯ x_n of w is alternating. In this case, we may without loss of generality assume X is a standard parametrisation.
If w admits an alternating factorisation, then we say that w is an (R_1, R_2)-alternating word (or simply an alternating word, if context makes the regular languages R_1, R_2 clear). It is clear that w admits at most one alternating factorisation, and hence, if w is an alternating word, then we may speak of the alternating factorisation of w, with associated standard parametrisation X. For convenience, we always also say that the empty word is alternating, with the 'unique' alternating factorisation ε (if necessary, we simply for convenience choose ε ∈ R_1). Note that not every factorisation of an alternating word as a word over (R_1 ∪ R_2)+ is alternating: for example, if R_1 = {x, xx} and R_2 = {y}, then the word xxy can be factorised as either x · x · y or xx · y as a word over (R_1 ∪ R_2)+; only the latter of the two factorisations is alternating.
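Since A_1 and A_2 are disjoint, any alternating factorisation must cut w exactly at the boundaries between maximal blocks of A_1-letters and A_2-letters, which makes the factorisation easy to compute. A sketch (our own, not from the paper; R_1 and R_2 are passed as membership predicates):

```python
def alternating_factorisation(w, A1, in_R1, in_R2):
    """Return the alternating factorisation of w as a list of blocks,
    or None if w is not (R1, R2)-alternating.

    A1 is the set of letters of the first alphabet; in_R1 and in_R2 are
    membership tests for the regular languages R1 and R2.
    """
    blocks, cur = [], ""
    for c in w:
        # cut whenever the alphabet changes (A1 and A2 are disjoint)
        if cur and ((c in A1) != (cur[0] in A1)):
            blocks.append(cur)
            cur = ""
        cur += c
    if cur:
        blocks.append(cur)
    ok = all((in_R1(b) if b[0] in A1 else in_R2(b)) for b in blocks)
    return blocks if ok else None
```

On the example above (R_1 = {x, xx}, R_2 = {y}), this recovers the unique alternating factorisation xx · y of the word xxy.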
The language of all (R_1, R_2)-alternating words is regular, being the language

Alt(R_1, R_2) = (R_1° ∪ {ε})(R_2° R_1°)*(R_2° ∪ {ε}),   (1-3)

where R_i° denotes R_i − {ε}. We denote by Alt+(R_1, R_2) the language Alt(R_1, R_2) − {ε} of nonempty alternating words.

LEMMA 1.7. Let S_1, S_2 be two semigroups, finitely generated by disjoint sets A_1, respectively A_2, and with regular combings R_1, respectively R_2. Then the language Alt+(R_1, R_2) is a regular combing of the semigroup free product S = S_1 * S_2.
PROOF. Let (s_1, s_2, …, s_k) ∈ S be an alternating sequence, with associated parametrisation X, that is, so that s_i ∈ S_{X(i)} for all 1 ≤ i ≤ k. For every 1 ≤ i ≤ k, there is some … , and we have the result.

Now the following follows immediately from Lemma 1.7 and standard normal form lemmas for semigroup free products.

LEMMA 1.8. Let S_1 and S_2 be as in Lemma 1.7. Let S = S_1 * S_2 denote their semigroup free product, and let u ≡ u_1 ⋯ u_m and v ≡ v_1 ⋯ v_n be the unique alternating factorisations of u and v, respectively, with associated standard parametrisations X, respectively Y. Then u =_S v if and only if …

Finally, we give an explicit expression for how multiplication works in semigroup free products with respect to the combing Alt+(R_1, R_2). For brevity, we let R = Alt+(R_1, R_2) and S = S_1 * S_2.

LEMMA 1.9. Let x ≡ x_1 x_2 ⋯ x_n ∈ R be an alternating product such that x_i ∈ R_{X(i)} for some standard parametrisation X. Let w_1, w_2 ∈ R be such that w_1 · w_2 = x in S. Then one of the following holds.
(1) For some 0 ≤ k ≤ n, we have … where x'_j ∈ R_{X(j)} and x'_j = x_j in S_{X(j)} for all 0 ≤ j ≤ n.

(2) For some 0 ≤ k ≤ n, we have … , and x'_j ∈ R_{X(j)} with x'_j = x_j in S_{X(j)} for all 0 ≤ j < k and k < j ≤ n.
PROOF. This follows directly from Lemma 1.8 and the multiplication in Equation (1-1) in semigroup free products; case (1) corresponds to the first case of Equation (1-1), and case (2) corresponds to the second.
We give a similar treatment regarding combings and monoid free products. Let M_1 and M_2 be two monoids, generated by two finite disjoint sets A_1, respectively A_2. Let M = M_1 * M_2 denote their monoid free product, and let S denote their semigroup free product. (To emphasise just how different M and S are, we note that S is always (!) an infinite semigroup, even if M_1 and M_2 are trivial monoids, whereas in this latter case, M would simply be trivial.) We let … The empty word is also declared to be reduced. Just as in the case of semigroup free products (Lemma 1.7), it is easy to see that if R_1, R_2 are regular combings of M_1, respectively M_2, then Alt(R_1, R_2) is a regular combing of M. We write, as before, R = Alt(R_1, R_2). We have the following simple structural lemma, based on the identification of M with the semigroup free product of M_1 by M_2 amalgamated over the trivial submonoid (see, for example, [71, page 266]). Of course, this lemma would fail spectacularly if the reduced condition were removed. Despite this connection between M and S, there is one important distinction to make from the semigroup free product case: when multiplying alternating words in a monoid free product, if we are in the case u_k ∼ v_0, we may, of course, have u_k v_0 =_{M_i} 1 for i = 1 or 2. Unlike the case of semigroup free products, this now means that u_k v_0 =_M 1. Hence, the multiplication table for M with respect to the regular combing R is mostly made up of the multiplication table for S, but with one additional case. We spell the above out in somewhat more technical language.

LEMMA 1.11. Let x ≡ x_1 x_2 ⋯ x_n ∈ R be reduced, with x_i ∈ R_{X(i)} for some standard parametrisation X. Let w_1, w_2 ∈ R be reduced with w_1 · w_2 = x in M. Then one of the following holds.
(1) For some 0 ≤ k ≤ n, we have … where x'_j ∈ R_{X(j)} and x'_j = x_j in M_{X(j)} for all 0 ≤ j ≤ n.

(2) For some 0 ≤ k ≤ n, we have … , and x'_j ∈ R_{X(j)} with x'_j = x_j in M_{X(j)} for all 0 ≤ j < k and k < j ≤ n.
(3) For some k ≥ 0 and m ≥ n, we have …

Cases (1) and (2) are 'inherited' from S by combining Lemmas 1.9 and 1.10 in the case that the concatenation w_1 w_2 is reduced, while case (3) corresponds to case (3) in Lemma 1.10. This case (3) highlights the recursive nature of reduction in free products (compare free reduction in free groups), and this recursion eventually terminates as |w'_j| < |w_j| for j = 1, 2. We give an example of an application of Lemma 1.11 below, in the case of the free product of two copies of the bicyclic monoid.

EXAMPLE 1.12. Let M_i = Mon⟨b_i, c_i | b_i c_i = 1⟩ for i = 1, 2 be two copies of the bicyclic monoid, and let x ≡ … in M, so that we can apply Lemma 1.11. Indeed, we find that we are in case (3), taking k = 2 and m = 3, …

These are all the statements we require about free products in the sequel.
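The bicyclic monoid and the recursive reduction of case (3) can be sketched concretely (our own encoding, not from the paper): we write the normal form c^m b^n of Mon⟨b, c | bc = 1⟩ as the pair (m, n), and represent an element of M_1 * M_2 as a reduced alternating list of pairs (i, s) with s in M_i.

```python
def bicyclic_mul(x, y):
    """Multiply normal forms c^m b^n in Mon< b, c | bc = 1 >:
    in (c^m b^n)(c^p b^q), the middle b^n c^p cancels min(n, p) pairs."""
    (m, n), (p, q) = x, y
    k = min(n, p)
    return (m + p - k, n + q - k)

def mon_free_product(x, y, mul, one):
    """Product in M_1 * M_2: concatenate, merge same-factor neighbours,
    drop identities, and keep reducing (the recursive case of Eq. (1-2))."""
    out = list(x)
    for i, s in y:
        while out and out[-1][0] == i:  # s_n ~ t_1: multiply inside M_i
            _, t = out.pop()
            s = mul[i](t, s)
        if s != one[i]:
            out.append((i, s))
        # if s == one[i], it is dropped, exposing the previous entry;
        # the next iteration's while-loop continues the reduction
    return out
```

For instance, with both factors bicyclic, the product (b_1 b_2)(c_2 c_1) cancels recursively down to the identity, exactly the cascading reduction of case (3).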
1.8. ET0L and substitutions. Word-hyperbolicity is connected with CF-multiplication tables. However, our results hold more generally, substituting, for example, ET0L or IND for CF, and we elaborate on this topic in Section 5. Specifically, the proofs of the main results about preservation properties in free products of word-hyperbolic algebraic structures (semigroups, monoids, or groups) in Sections 2 and 4 are all applicable to free products of algebraic structures with C-multiplication tables, where C is some full AFL satisfying the monadic ancestor property. This includes the cases when C is one of CF, IND or ET0L. We give a brief overview of the strong historical connections between ET0L and the monadic ancestor property. This is a complex history; we cannot do it full justice here, and it will be expanded on in a future survey article.
We give the definition of a substitution. Let A be an alphabet. For each a ∈ A, let σ(a) be a language (over any finite alphabet); let σ(ε) = {ε}; for every x, y ∈ A*, let σ(xy) = σ(x)σ(y); and for every L ⊆ A*, let σ(L) = ⋃_{w∈L} σ(w). We then say that σ is a substitution. For a class C of languages, if for every a ∈ A we have σ(a) ∈ C, then we say that σ is a C-substitution. Let A be an alphabet, and σ a substitution on A. For every a ∈ A, let A_a denote the smallest finite alphabet such that σ(a) ⊆ A_a*. Extend σ accordingly, and let σ^∞(L) = ⋃_{n≥1} σ^n(L). Then we say that σ^∞ is an iterated substitution. If for every b ∈ A ∪ (⋃_a A_a) we have b ∈ σ(b), then we say that σ^∞ is a nested iterated substitution. Note that every nested iterated substitution is, of course, an example of an iterated substitution. If σ^∞ is nested, then it is convenient for inductive purposes to set σ^0(L) := L. Note that the nested property ensures L ⊆ σ(L), so ⋃_{n≥0} σ^n(L) = ⋃_{n>0} σ^n(L). We say that C is closed under nested iterated substitution if for every C-substitution σ and every L ∈ C, we have: if σ^∞ is a nested iterated substitution, then σ^∞(L) ∈ C. A similar definition yields closure under iterated substitutions. For the benefit of the reader, we mention two facts that can be useful to keep in mind, expanded on below: the class CF is closed under nested iterated substitution (but not iterated substitution), and the class ET0L is closed under iterated substitution (and hence also nested iterated substitution).
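These definitions can be sketched as follows (our own illustration, restricted to substitutions with finite images): σ maps each letter to a finite set of words and extends multiplicatively to words and languages; iterating to a finite depth approximates σ^∞(L).

```python
from itertools import product

def apply_sub(sigma, word):
    """sigma(w) for a single word: choose one image per letter, concatenate.

    Letters without an entry in sigma are fixed (sigma(c) = {c})."""
    choices = [sigma.get(c, {c}) for c in word]
    return {"".join(p) for p in product(*choices)}

def iterate_sub(sigma, L, depth):
    """Union of sigma^k(L) for 0 <= k <= depth, approximating sigma^inf(L)."""
    result, current = set(L), set(L)
    for _ in range(depth):
        current = set.union(set(), *(apply_sub(sigma, w) for w in current))
        result |= current
    return result
```

For example, the substitution σ(a) = {a, aa} is nested (a ∈ σ(a)), and iterating it from L = {a} produces the words a, aa, aaa, …, with longer words appearing at greater depths.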
Substitutions are closely related to AFLs. Indeed, the 1967 article by Ginsburg and Greibach which first defined AFLs [50] (later expanded in [51]) included a proof of a form of substitution-closure for AFLs (under ε-free regular substitutions) and for full AFLs (under regular substitutions). The closure of CF under nested iterated substitution was proved by Král [75] in 1970. Following some further results (for example, [52]), an abstract basis for substitution was developed by Ginsburg and Spanier [53]; one particularly important notion developed there was treating the (nested) substitution-closure of a full AFL as a form of 'algebraic closure'. In particular, it is proved that the substitution-closure of a full AFL is a full AFL [53, Theorem 2.1]. Lewis [79] used substitution to define full AFLs, and rediscovered the aforementioned result by Ginsburg and Spanier; see [79, Theorem 1.13]. See also [12, 19].
Substitutions can be useful in studying full AFLs for a number of reasons; for example, one can recover results of Ginsburg and Greibach [52] about principal AFLs, see [79, Corollary 1.21]. One can also use substitution-based ideas to produce (see [28, Corollary 4.13]) an infinite strict hierarchy of full AFLs C_i between the classes CF and ET0L; see also [60, 78] for related such hierarchies. For similar hierarchies between ET0L and IND, see [40, 43]; and for infinite hierarchies between IND and CS, see [16, 42].
Because of the importance and utility of iterated substitution, Greibach [59] (later expanded in [61]) defined a super-AFL as a full AFL closed under nested iterated substitution (by [90, Proposition 2.2], this is equivalent to the definition of super-AFL as given in Section 1.2). Not long after, the notion of a hyper-AFL was introduced, being any full AFL closed under iterated substitution [11, 99]. (Asveld [14, page 1] on this point says the following: 'Similar as in ordinary algebra - where one went from groups to semigroups, rings, and fields - full AFLs gave rise to weaker structures (full trios, full semiAFLs) and more powerful ones: full substitution-closed AFLs, full super-AFLs, and full hyper-AFLs'. We cannot agree with this assessment of the historical development of 'ordinary' algebra. Finite fields and groups were intricately connected already in the early works of both Lagrange and Galois (see [88]), whereas rings and semigroups would not appear as objects of study until half a century, respectively a century, later. Similarly, Klein initially posed an axiomatisation of a group as what we today call a monoid, but as Lie 'in his study of infinite groups saw it as necessary to expressly require [the existence of inverses]' ('...sah sich Lie genötigt, ausdrücklich zu verlangen...', [74, page 335]), it was this axiomatisation that was chosen. We strongly recommend the interested reader to consult Wußing [103]. The above paragraph shows the difficulty of summarising the development of ordinary algebra in a linear manner; and one may feel similarly about the linear narrative regarding AFLs.) Many fundamental results about hyper-AFLs and substitution were developed by Christensen [28], who also, along with Asveld [11], fleshed out the connections between ET0L and hyper-AFLs noted by, for example, Salomaa [98, 99] and Čulík [33]; see also [37, 41]. In particular, at this point, we arrive at the following rather pleasant result: the class ET0L is the least full hyper-AFL. Furthermore, one can also show that IND is a super-AFL [36]. As mentioned, the connections between substitutions and ET0L remain active research topics (if somewhat implicitly), but are far too numerous to recount here. While they will be given a proper treatment in the future, we mention a few. For example, one can give a complexity analysis of iterated substitutions, with applications to both ET0L and EDT0L languages [13], and there are connections with fuzzy logic [15]. One can also extend the notion of substitution to 'deterministic substitution' (which is not defined here), leading to a statement analogous to the fact that EDT0L is the least dhyper-AFL [12, Corollary 4.5]; see also [77] for more on EDT0L and substitutions. At this point, it bears mentioning that there is a great deal of involved and often obfuscating notation and abbreviations; as an example, we have that 'if K is a pseudoid, then η(K) is the smallest full dhyper-QAFL containing K' [12, Theorem 4.5]. In addition, there are a great number of abbreviations for classes of languages associated with Lindenmayer systems (yielding the L); aside from ET0L and EDT0L, we have, for example, L, 0L, P0L, T0L, E0L, X0L, EP0L, FE0L(k), EPT0L, FEPT0L(k), ...;
see, for example, [85] for a large number of these. (Given the number of abbreviations, one may reasonably inquire about the language-theoretic properties of the language of all abbreviations of classes of languages.) We assure the reader not familiar with this multitude of notation that most, if not all, such classes are generally defined (or definable) by relatively straightforward means; see, for example, the definition of ET0L as given by Theorem 1.13(2). Furthermore, the reader may notice, in the subsequent sections, the importance of substitution in dealing with free products; this link between the algebraic and the formal language-theoretic runs deep, and there seems to be ample opportunity to develop it further.

Free products of word-hyperbolic semigroups
In this section, we prove the main result regarding semigroup free products and word-hyperbolicity (Theorem A).
Let S_1, S_2 be two semigroups, finitely generated by disjoint sets A_1, respectively A_2, and with regular combings R_1, respectively R_2. Let S = S_1 * S_2 denote the semigroup free product of S_1 and S_2. We begin by recalling (Lemma 1.7) that the language Alt^+(R_1, R_2) of alternating words is a combing for S. Let R = Alt^+(R_1, R_2). This is, in the following, our chosen combing for proving that the table T_S(R) is context-free when the factors S_1, S_2 are word-hyperbolic.

THEOREM A. Let S_1, S_2 be 1-extendable word-hyperbolic semigroups. Then the free product S_1 * S_2 is word-hyperbolic.

PROOF. Suppose, for i = 1, 2, that S_i is generated by the finite set A_i, and that S_i is word-hyperbolic with respect to the regular combing R_i ⊆ A_i^+, with the multiplication table T(R_i) context-free. We assume without loss of generality that A_1 ∩ A_2 = ∅, and hence that R_1 ∩ R_2 = ∅. As S_i is 1-extendable, the semigroup S_i^1 is word-hyperbolic with respect to the regular combing R_i' = R_i ∪ {ε}, where now ε is the unique word mapping to the identity element of S_i^1. For i = 1, 2, define the monadic rewriting system 𝔯_i by
𝔯_i = {(x #_1 y #_2 z^rev, #_2) : x, y, z ∈ R_i' and x • y = z in S_i^1}.
Then, by assumption, 𝔯_i is a context-free monadic rewriting system. Note that for every x, x' ∈ R_i' with x =_{S_i^1} x', we have (x #_1 #_2 (x')^rev, #_2) ∈ 𝔯_i, and thus also (x #_1 #_2 x^rev, #_2) ∈ 𝔯_i. Let 𝔯 be the rewriting system 𝔯_1 ∪ 𝔯_2. This is also a context-free monadic rewriting system. Recall that ∇*_𝔯(#_1 #_2) denotes the set of ancestors of the word #_1 #_2 under 𝔯.

LEMMA 2.1. We have w ∈ ∇*_𝔯(#_1 #_2) if and only if
w ≡ #_1 (x_1 #_1 y_1)(x_2 #_1 y_2) ⋯ (x_n #_1 y_n) #_2 (z_1 z_2 ⋯ z_n)^rev (2-1)
for some n ≥ 0, where for every 1 ≤ i ≤ n we have x_i, y_i, z_i ∈ R'_{X(i)} and x_i • y_i = z_i in S^1_{X(i)}, where X is some parametrisation.

PROOF.
(⇐) Suppose w is of the form of Equation (2-1). We prove the claim by induction on n. The case n = 0 is immediate, for then w ≡ #_1 #_2. Suppose n > 0. Then w contains exactly one occurrence of #_2; to the left of this occurrence is an occurrence of the word x_n #_1 y_n, and to the right is an occurrence of the word z_n^rev. As x_n • y_n = z_n in S^1_{X(n)}, the rule (x_n #_1 y_n #_2 z_n^rev, #_2) lies in 𝔯, and applying it to w yields a word of the form of Equation (2-1) with n − 1 in place of n. By the inductive hypothesis, the right-hand side now lies in ∇*_𝔯(#_1 #_2), and hence so does w.

(⇒) Suppose w rewrites to #_1 #_2 in k steps. The proof is by induction on k. The base case k = 0 is trivial, for then w ≡ #_1 #_2. Suppose k > 0. Then there is some w' with w → w' →^{k−1} #_1 #_2, and such that the rewriting w → w' is via some rule r ≡ (x #_1 y #_2 z^rev, #_2) ∈ 𝔯. Then, as r ∈ 𝔯, we have x, y, z ∈ R_1' ∪ R_2' and x • y = z in S^1_j for j = 1 or j = 2. Now, by the inductive hypothesis, w' is of the form of Equation (2-1) for some m ≥ 0, with some parametrisation X' such that for every 1 ≤ i ≤ m, we have x_i y_i = z_i in S^1_{X'(i)}. As the right-hand side of r contains only one occurrence of #_2, and as w contains only one occurrence of #_2, it follows that
w ≡ #_1 (x_1 #_1 y_1) ⋯ (x_m #_1 y_m)(x #_1 y) #_2 z^rev (z_1 ⋯ z_m)^rev, (2-2)
and hence, taking n = m + 1 and defining the parametrisation X(i) = X'(i) for i < n, and X(n) = j, the expression in Equation (2-2) is an expression of the form in Equation (2-1) for w.
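The rewriting underlying this lemma can be illustrated on toy data. The sketch below (with all names, and the choice of the free semigroup on one generator as the factor, being illustrative assumptions) encodes rules of the shape (x #_1 y #_2 z^rev, #_2), writing '#' for #_1 and '$' for #_2, and checks membership in the ancestor set of #_1 #_2 by exhaustive search.

```python
# A toy check, on finite data, of the reduction in Lemma 2.1: each rule of the
# monadic system replaces a factor x#y$z_rev by $, and membership of w in the
# ancestor set of "#$" means w rewrites to "#$". Illustrative data only.

def rewrites_to(word, rules, target):
    """Exhaustive search: does `word` rewrite to `target` under `rules`,
    where each rule (lhs, rhs) replaces one occurrence of lhs by rhs?"""
    seen, stack = set(), [word]
    while stack:
        w = stack.pop()
        if w == target:
            return True
        if w in seen:
            continue
        seen.add(w)
        for lhs, rhs in rules:
            start = w.find(lhs)
            while start != -1:
                stack.append(w[:start] + rhs + w[start + len(lhs):])
                start = w.find(lhs, start + 1)
    return False

# A fragment of the table of the free semigroup on {a} with identity adjoined:
# a^i . a^j = a^(i+j), with the empty word representing the identity. Rules
# have the form (x "#" y "$" z^rev, "$").
rules = [(x + "#" + y + "$" + (x + y)[::-1], "$")
         for x in ["", "a", "aa"] for y in ["", "a", "aa"]]
# "#a#a$aa" has the form of Equation (2-1) with n = 1 (a . a = aa), and is an
# ancestor of "#$"; changing the final component breaks this.
```

As expected, `rewrites_to("#a#a$aa", rules, "#$")` holds, while the ill-formed word "#a#a$a" (asserting a • a = a) is not an ancestor.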
We now show that a particular rational transduction of the language of all words of the form of Equation (2-1) equals T_S(R). This yields the result. Let τ_0 ⊆ A_#* × A_#* be the rational transduction defined as follows: for any word w ∈ A_#*, the language τ_0(w) consists of all words obtainable by erasing some (possibly zero) number of #_1-symbols in w, while fixing all other symbols. Define the language
L_0 = τ_0(∇*_𝔯(#_1 #_2)) ∩ (R #_1 R #_2 R^rev). (2-3)

LEMMA 2.2. The language L_0 is a context-free language.
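A minimal sketch of τ_0 on a single word, assuming '#' stands for #_1; this is only the finite, word-by-word picture of the transduction, with illustrative names.

```python
# A sketch of the transduction tau_0: it nondeterministically erases
# #_1-symbols (here written "#") and fixes everything else, so tau_0(w) is the
# finite set of all words obtained from w by deleting some subset of its "#"
# occurrences. Illustrative only.
from itertools import combinations

def tau0(word, marker="#"):
    positions = [i for i, c in enumerate(word) if c == marker]
    out = set()
    for r in range(len(positions) + 1):
        for erased in combinations(positions, r):
            dropped = set(erased)
            out.add("".join(c for i, c in enumerate(word) if i not in dropped))
    return out
```

For instance, τ_0 applied to "#a#b" yields the four words obtained by keeping or erasing each of the two '#'-symbols.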
PROOF. This is an immediate consequence of the expression in Equation (2-3), in combination with the facts that (i) 𝔯 is a context-free monadic rewriting system; (ii) every singleton language is in CF; (iii) the class CF has the monadic ancestor property; and (iv) the class CF is closed under rational transduction (and hence also, in particular, under intersection with regular languages).
We now show that L_0 = T_S(R), beginning with the forward inclusion.

LEMMA 2.3. We have L_0 ⊆ T_S(R).
PROOF. Suppose w ∈ L_0. Then, (1) w is an element of τ_0(w'), where w' is of the form of Equation (2-1) (by Lemma 2.1); and (2) w ∈ R #_1 R #_2 R^rev. As τ_0(w') consists of all words obtainable from w' by erasing some number of #_1-symbols, and the words in R #_1 R #_2 R^rev contain exactly one #_1, it follows from the expression in Equation (2-1) for w' that
w ≡ (x_1 y_1) ⋯ (x_k y_k) #_1 (x_{k+1} y_{k+1}) ⋯ (x_n y_n) #_2 (z_1 ⋯ z_n)^rev,
where for every 1 ≤ i ≤ n we have x_i • y_i = z_i in S^1_{X(i)}, with X some parametrisation. Now x_i ∼ y_i for all 1 ≤ i ≤ n. Furthermore, y_i ≁ x_{i+1} for all 1 ≤ i < k and k < i ≤ n, as (x_1 y_1) ⋯ (x_k y_k) and (x_{k+1} y_{k+1}) ⋯ (x_n y_n) are alternating words. It follows that we must have x_i y_i ∈ R_{X(i)} for every 1 ≤ i ≤ n, and that z_i ≁ z_{i+1} for every 1 ≤ i < n except possibly i = k. We thus have two cases: (1) z_k ≁ z_{k+1}; or else (2) z_k ∼ z_{k+1}. In either case, let z_i' ≡ x_i y_i for 1 ≤ i ≤ n. Then z_i' ∈ R_{X(i)}, and w is of the form
w ≡ z_1' ⋯ z_k' #_1 z_{k+1}' ⋯ z_n' #_2 (z_1 ⋯ z_n)^rev.
Suppose we are in case (1). As z_i' ≡ x_i y_i, and z_i' = z_i in S^1_{X(i)}, we thus have that w is the element of the multiplication table T_S(R) corresponding to the product (z_1' ⋯ z_k') • (z_{k+1}' ⋯ z_n') = z_1 ⋯ z_n. In case (2), as z_i' =_{S^1_{X(i)}} z_i for each i, it follows that z_k' ∼ z_{k+1}', and hence, as R consists of alternating words and z_1 ⋯ z_n ∈ R, that z_k z_{k+1} ∈ R_{X(k)}. Let z ≡ z_k' z_{k+1}', and let z' ≡ z_k z_{k+1}. Then z =_{S^1_{X(k)}} z'. Thus, w is the element of T_S(R) corresponding to the product (z_1' ⋯ z_k') • (z_{k+1}' ⋯ z_n') = z_1 ⋯ z_{k−1} z' z_{k+2} ⋯ z_n, which also clearly holds in S. Thus, in either case, we have that w ∈ T_S(R).
We hence have L_0 ⊆ T_S(R). We now prove the converse of Lemma 2.3.
where x_i ∈ R_{X(i)} for some standard parametrisation X. By Lemma 1.9, we either fall in case (1) or (2) of the same lemma.
In case (1), we have, using the notation of that lemma, that w ∈ L_0, which is what was to be shown. In case (2), the proof is almost the same as in case (1), but the reductions are no longer exclusively by rules of the form (x_i #_1 #_2 x_i^rev, #_2). In the same way as in case (1), however, we find that w ∈ τ_0(W), where W is a word of the form of Equation (2-1). By applying the rules of 𝔯, where in the final step we use one of the rules (x #_1 #_2 x^rev, #_2), the proof now proceeds just as in case (1), and we find that W ∈ ∇*_𝔯(#_1 #_2); as w ∈ τ_0(W) and w ∈ R #_1 R #_2 R^rev, we have w ∈ L_0. Thus, we have T_S(R) = L_0. As R is a regular combing of S = S_1 * S_2 by Lemma 1.7, and as L_0 is context-free by Lemma 2.2, we conclude that (R, T_S(R)) is a word-hyperbolic structure for S = S_1 * S_2. This completes the proof of Theorem A.
By Lemma 1.6, we find the following explicit corollaries of Theorem A.

COROLLARY 2.5. The semigroup free product of two word-hyperbolic monoids is word-hyperbolic.
COROLLARY 2.6. The semigroup free product of two (von Neumann) regular word-hyperbolic semigroups is word-hyperbolic.

COROLLARY 2.7. The semigroup free product of two word-hyperbolic semigroups with uniqueness is word-hyperbolic with uniqueness.
The final 'with uniqueness' in the statement of Corollary 2.7 follows from the fact that the elements of Alt^+(R_1, R_2) represent pairwise distinct elements of S. We now turn towards considering monoid free products. To do this, we first need to introduce a useful, purely language-theoretic operation.

Polypartisan ancestors
In this section, we generalise (in a fairly uncomplicated manner) the bipartisan ancestors introduced in [90] to polypartisan ancestors, and prove that this construction preserves certain language-theoretic properties of the languages to which it is applied. We use this construction to obtain the multiplication table for a monoid free product from the table for a semigroup free product.
Let A be a finite alphabet, and let k ≥ 1. Let #_1, #_2, …, #_k be k new symbols, and let
x_k(A) = A* #_1 A* #_2 ⋯ #_k A*.
We call x_k(A) the full k-shuffled language (associated to A). Any subset of x_k(A) is called a k-shuffled language (with respect to A). Thus, the 'word problem' in the sense of Duncan and Gilman [38] for a monoid generated by A is a 1-shuffled language, that is, a subset of x_1(A), and its multiplication table is a 2-shuffled language, that is, a subset of x_2(A). Furthermore, the solution set for a set of equations in k unknowns over a group is a k-shuffled language [29]. For elements w ∈ x_k(A), we introduce the notation [u_0, u_1, …, u_k] for the word u_0 #_1 u_1 #_2 ⋯ #_k u_k, where each u_i ∈ A*. To abbreviate even further, we write [u^(k)] for [u_0, u_1, …, u_k]. Thus, the word problem for a monoid M consists of words [u^(1)] with u_0 =_M u_1^rev, and a multiplication table for M consists of words [u^(2)] with u_0 u_1 =_M u_2^rev. Let R_0, R_1, …, R_k be monadic rewriting systems, and let L be a k-shuffled language. Define
R^(k)(L) = {[u_0, …, u_k] ∈ x_k(A) : there is some [v_0, …, v_k] ∈ L with u_i ∈ ∇*_{R_i}(v_i) for all 0 ≤ i ≤ k}.
We call R^(k)(L) the (k + 1)-partisan ancestor of L (with respect to R_0, R_1, …, R_k). Polypartisan ancestors generalise in an easy way the bipartisan ancestors introduced by the author in [90]. It is clear that R^(k)(L) is a k-shuffled language. The use for polypartisan ancestors in this present article is in preserving language-theoretic properties, in the following sense.

PROPOSITION 3.1. Let C be a super-AFL. Let L ∈ C be a k-shuffled language, and let R_0, R_1, …, R_k be C-monadic rewriting systems. Then R^(k)(L) ∈ C.
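The bracket notation and membership in the full k-shuffled language can be sketched concretely; the rendering of #_1, #_2, … as the two-character strings "#1", "#2", … is an assumption made purely for illustration.

```python
# A sketch of the bracket notation [u_0, u_1, ..., u_k] for k-shuffled words:
# the components u_i over A are interleaved with new symbols #1, ..., #k.
# Illustrative names and encodings only.

def bracket(components):
    """Encode [u_0, ..., u_k] as the word u_0 #1 u_1 #2 u_2 ... #k u_k."""
    word = components[0]
    for i, u in enumerate(components[1:], start=1):
        word += "#%d" % i + u
    return word

def in_full_shuffle(word, alphabet, k):
    """Membership in the full k-shuffled language x_k(A) = A* #1 A* ... #k A*:
    each #i must occur exactly once, in order, with A-words in between."""
    rest, pieces = word, []
    for i in range(1, k + 1):
        if rest.count("#%d" % i) != 1:
            return False
        left, rest = rest.split("#%d" % i)
        pieces.append(left)
    pieces.append(rest)
    return all(c in alphabet for piece in pieces for c in piece)
```

For example, the multiplication-table word [ab, a, b] is the 2-shuffled word "ab#1a#2b", whereas a word with the markers out of order is rejected.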
The technique we use to prove Proposition 3.1 is a generalisation of a similar technique used to prove [90, Proposition 2.5], but follows its ideas rather closely. We first prove a weaker form of Proposition 3.1 (namely Lemma 3.2). We then use a rational transduction to move from the general case to this weaker form.
Let A_0, A_1, …, A_k be k + 1 alphabets, with A ∩ A_i = ∅ for all i, and with A_i ∩ A_j = ∅ for i ≠ j. We define the language
x_k(A_0, …, A_k) = A_0* #_1 A_1* #_2 ⋯ #_k A_k*,
and call x_k(A_0, …, A_k) a separated k-shuffle. For separated k-shuffles, preservation properties are simple to prove.

LEMMA 3.2. Let C be a super-AFL. Let L ∈ C with L ⊆ x_k(A_0, …, A_k), and for 0 ≤ i ≤ k let R_i be a C-monadic rewriting system on A_i*. Then R^(k)(L) ∈ C.
PROOF. This closely follows the proof of [90, Lemma 2.4], which is the case k = 1, so we only sketch the main idea. As the alphabets A_0, …, A_k are pairwise disjoint, the systems R_0, …, R_k act independently on the components, and R^(k)(L) = ∇*_R(L) for the C-monadic rewriting system R = R_0 ∪ ⋯ ∪ R_k. As C is a super-AFL, it has the monadic ancestor property, whence we find that ∇*_R(L) is in C.

We, from this point on, assume that |A_i| = |A| for all 0 ≤ i ≤ k, and fix bijections ϕ_i : A → A_i. We extend these to isomorphisms ϕ_i : A* → A_i* of free monoids. We let R_i^ϕ = ϕ_i(R_i), where the action of ϕ_i is entry-wise on the rules of R_i. If R_i is a C-monadic rewriting system, then so too clearly is R_i^ϕ. We define a rational transduction μ_k by
μ_k([u_0, u_1, …, u_k]) = [ϕ_0(u_0), ϕ_1(u_1), …, ϕ_k(u_k)].
Then μ_k is indeed rational. If μ_k is applied to (the singleton language containing) exactly one word w ∈ x_k(A), it clearly produces (the singleton language containing) exactly one word from x_k(A_0, …, A_k), and μ_k is injective on x_k(A). That is, if w ≡ [u_0, u_1, …, u_k], where u_i ∈ A*, then μ_k(w) ≡ [ϕ_0(u_0), ϕ_1(u_1), …, ϕ_k(u_k)]; and if w_1, w_2 ∈ x_k(A), then μ_k(w_1) = μ_k(w_2) if and only if w_1 ≡ w_2, as each ϕ_i is an isomorphism of free monoids. Let μ_k^{-1} denote the inverse of the rational transduction μ_k. We have then proved that w ∈ R^(k)(L) if and only if there is some u ∈ L such that μ_k(w) lies in the (k + 1)-partisan ancestor of μ_k(u) with respect to R_0^ϕ, …, R_k^ϕ. In other words, as w is arbitrary, R^(k)(L) is the image under μ_k^{-1} of a (k + 1)-partisan ancestor of the separated k-shuffle μ_k(L); this ancestor lies in C by Lemma 3.2, and as C is closed under rational transduction, R^(k)(L) ∈ C. This completes our discussion of polypartisan ancestors.
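A toy sketch of μ_k, assuming each bijection ϕ_i is given as a dictionary, each #_i occurs exactly once, and the tagged letters "a0", "a1", … stand in for the disjoint copies A_i; all names are illustrative.

```python
# A sketch of the transduction mu_k: fix bijections phi_i from A onto disjoint
# copies A_i, rename the i-th component of [u_0, ..., u_k] by phi_i, and fix
# the separator symbols #1, ..., #k. Since each phi_i is a bijection, mu_k is
# injective, and its inverse is obtained by inverting the dictionaries.

def mu_k(word, k, phi):
    """phi[i]: dict renaming each letter of A into the copy A_i."""
    rest, pieces = word, []
    for i in range(1, k + 1):
        left, rest = rest.split("#%d" % i)
        pieces.append(left)
    pieces.append(rest)
    renamed = ["".join(phi[i][c] for c in piece)
               for i, piece in enumerate(pieces)]
    out = renamed[0]
    for i in range(1, k + 1):
        out += "#%d" % i + renamed[i]
    return out

phi = {0: {"a": "a0"}, 1: {"a": "a1"}, 2: {"a": "a2"}}
separated = mu_k("aa#1a#2a", 2, phi)
```

Applied to the 2-shuffled word "aa#1a#2a", this produces a separated word whose components lie over the pairwise disjoint alphabets {a0}, {a1}, {a2}.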

Monoid free products
In this section, we consider monoid free products. We begin by proving the main theorem for free products of word-hyperbolic monoids with 1-uniqueness (Theorem B). We then present a theorem which applies outside the 1-uniqueness case, to the cases when the combings R_i of the factor monoids M_i satisfy R_i* = R_i (Theorem 4.5). We then argue that these two cases are, in a certain sense, complementary (Section 4.3).
4.1. The case of 1-uniqueness. Suppose that M_i (for i = 1, 2) is a word-hyperbolic monoid with 1-uniqueness, with respect to the regular combing R_i. By definition, the only word in R_i that represents the identity of M_i is ε. Let R_i' = R_i − {ε}. Then it is clear that every alternating word in Alt(R_1, R_2) is reduced; for if u_0 u_1 ⋯ u_n is the alternating factorisation of u ∈ Alt(R_1, R_2), and u is not reduced, then u_i = 1 in either M_1 or M_2 for some 0 ≤ i ≤ n, and hence u_i ≡ ε, a contradiction to u_i ∈ R_1' ∪ R_2'. Hence, by Lemma 1.8, monoid free products of monoids with 1-uniqueness behave essentially as semigroup free products of the same monoids, up to the fact that the product of two reduced sequences may not be reduced.
Using monadic ancestry, we may deal with this latter issue, and show the following main theorem.

THEOREM B. Let M_1, M_2 be two word-hyperbolic monoids with 1-uniqueness (with uniqueness). Then the monoid free product M_1 * M_2 is word-hyperbolic with 1-uniqueness (with uniqueness).
PROOF. If M_1 (respectively M_2) is word-hyperbolic with 1-uniqueness, then the only element of R representing the identity element is ε, as the only element of R_1 (respectively R_2) representing the identity element of M_1 (respectively M_2) is ε. Analogously, if M_1, M_2 are word-hyperbolic with uniqueness, then every alternating word is reduced, and hence every pair of distinct words in R represent distinct elements of M by Lemma 1.10. Hence, it suffices to show that M is word-hyperbolic with respect to R.
For i = 1, 2, we define the monadic rewriting system
𝔰_i = {(p #_1 q, #_1) : p, q ∈ R_i and p • q =_{M_i} 1}.
Now, the language of left-hand sides of #_1 in 𝔰_i is T_{M_i}(R_i) / {#_2 ε}, where / denotes the right quotient, in this case by the regular language {#_2 ε}. As T_{M_i}(R_i) is a context-free language, so too is its right quotient by any regular language. We conclude that 𝔰_i is a context-free monadic rewriting system. Hence, the union 𝔰 = 𝔰_1 ∪ 𝔰_2 is also a context-free monadic rewriting system. We define the language
L_1 = ∇*_𝔰(T_S(R)) ∩ (R #_1 R #_2 R^rev). (4-1)
We prove that L_1 = T_M(R), which suffices to prove the theorem (as a quick argument shows). This highlights that the language-theoretic properties of the monoid free product of word-hyperbolic monoids with 1-uniqueness are not significantly more complicated than those of the semigroup free product of the same monoids. One direction is easy, and depends on little more than the two facts that (i) T_S(R) ⊆ T_M(R); and (ii) rewriting by 𝔰 preserves the element of M represented by each component.

LEMMA 4.1. We have L_1 ⊆ T_M(R).

PROOF. The proof of this is entirely analogous to that of Lemma 2.3, with one minor addition: note that if w_1 #_1 w_2 #_2 w_3^rev ∈ T_S(R), then we have w_1 • w_2 =_S w_3 and hence also w_1 • w_2 =_M w_3. We leave the (simple) details to the reader.
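The idea of cancellation at the #_1-junction can be illustrated on toy data: a pair p, q with p • q = 1 in a factor monoid may be deleted on either side of the #_1-symbol, or, read backwards, inserted there. The rule shape, names, and data below are illustrative assumptions, not the paper's exact system.

```python
# A toy sketch of junction cancellation: reverse application of rules
# (p#q, #) inserts a cancelling pair (p.q = 1 in the factor) around the
# junction "#", so ancestors of the semigroup table absorb such cancellation.
# Here the factor is the group Z/2 = {1, a} with a.a = 1, written over {"a"};
# everything is invented for illustration.

def ancestors_at_junction(table_words, cancelling_pairs, depth):
    """Close `table_words` under inserting p before and q after the (unique)
    junction "#", up to `depth` insertions: a finite approximation of the
    ancestor set under the deletion rules (p#q, #)."""
    current = set(table_words)
    for _ in range(depth):
        new = set()
        for w in current:
            left, right = w.split("#", 1)
            for p, q in cancelling_pairs:
                new.add(left + p + "#" + q + right)
        current |= new
    return current

# a.a = 1 in Z/2, so ("a", "a") is a cancelling pair; with "$" for #_2 and the
# empty word representing the identity, "a#a$" (that is, a . a = 1) is an
# ancestor of the trivial table word "#$".
anc = ancestors_at_junction({"#$"}, [("a", "a")], 2)
```

Two rounds of insertion produce both "a#a$" and "aa#aa$" as ancestors of "#$".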
We remark (as is needed in Section 4.2) that the assumption of 1-uniqueness is not needed to prove Lemma 4.1. The nontrivial part of the equality T_M(R) = L_1 is given by the following lemma.

LEMMA 4.2. We have T_M(R) ⊆ L_1.

PROOF. Suppose w ≡ w_1 #_1 w_2 #_2 x^rev ∈ T_M(R). Then w_1, w_2, x ∈ R, and w_1 • w_2 =_M x. By 1-uniqueness, w_1, w_2, and x are all necessarily reduced (though the concatenation w_1 w_2 may not be).
Hence, we can apply Lemma 1.11. If we are in case (1) or (2), then by Lemma 1.9, we have w_1 • w_2 =_S x, and so w_1 #_1 w_2 #_2 x^rev ∈ T_S(R); hence, using no rewritings, we find w ∈ ∇*_𝔰(T_S(R)). If we are instead in case (3), then we must use 𝔰 nontrivially: a rule of 𝔰 applies at the #_1-junction, so by Equation (4-2), we also have w ∈ ∇*_𝔰(T_S(R)). We conclude by induction that w ∈ L_1, as desired.
Hence, we have found a regular combing R of M such that T_M(R) is given by the right-hand side of Equation (4-1). The right-hand side of Equation (4-1) is context-free, by the following chain of reasoning: (i) T_S(R) ∈ CF by Theorem A; hence (ii) ∇*_𝔰(T_S(R)) ∈ CF, as the class of context-free languages has the monadic ancestor property and 𝔰 is a context-free monadic rewriting system; and (iii) thus T_M(R) ∈ CF, as CF is closed under intersection with regular languages. Hence, (R, T_M(R)) is a word-hyperbolic structure for M = M_1 * M_2. This completes the proof of Theorem B.
Word-hyperbolicity with 1-uniqueness is not an unusual phenomenon. For example, it always holds in hyperbolic groups, so we find the following immediate corollary of Theorem B.

COROLLARY 4.3. The free product of two hyperbolic groups is hyperbolic.

PROOF. By [49, Theorem 1] (see also [38, Corollary 4.3]), a group is hyperbolic (in the geometric sense) if and only if it is word-hyperbolic (in the language-theoretic sense of this paper). Hence, as the monoid free product of two groups is the same as the (ordinary) free product of two groups, in view of Theorem B, it suffices to show that hyperbolic groups are word-hyperbolic with 1-uniqueness. However, every hyperbolic group G, generated by a finite set A, is word-hyperbolic with respect to the regular combing R ⊆ A* given by the language of geodesics in the Cayley graph of G, and there is only one geodesic corresponding to the identity element; see [32, Theorem 4.2].
Of course, Corollary 4.3 is well known in geometric group theory, and is not difficult to show geometrically. Our approach, via Theorem B, gives a proof which instead goes via formal language theory.

4.2. ∗-word-hyperbolic monoids. In this section, we describe a stronger property than word-hyperbolicity: we say that a monoid M is ∗-word-hyperbolic if it is word-hyperbolic with respect to a regular combing R satisfying R* = R. Let M be a word-hyperbolic monoid with respect to a regular combing R, with an associated M-equivariant context-free monadic rewriting system T, and let T^rev denote the system of all rules (w^rev, 1) such that (w, 1) ∈ T. Then T^rev is a context-free monadic rewriting system, as the class CF is closed under reversal.
Note that for every word w ∈ R, there exists some (not necessarily unique) reduced w' ∈ R such that w →*_T w'. Of course, as T is M-equivariant, for such w, w', we have w =_M w'.
THEOREM 4.5. Let M_1, M_2 be ∗-word-hyperbolic monoids. Then the monoid free product M_1 * M_2 is ∗-word-hyperbolic.

PROOF. Let R_1 = R_2 = T, and let R_3 = T^rev. Consider the polypartisan ancestor
L_2 = R^(3)(L_1) ∩ (R #_1 R #_2 R^rev). (4-3)
Recall the definition of L_1 in Equation (4-1), and see Section 3 for notation pertaining to polypartisan ancestors. By Lemma 4.1 (and the remark following it), we have L_1 ⊆ T_M(R). Hence, L_2 consists of some collection of words of the form w_1 #_1 w_2 #_2 x^rev with w_1, w_2, x ∈ R such that there exist words w_1', w_2', x' ∈ R with w_1' • w_2' =_M x'. As the systems R_1 and R_2 are M-equivariant, and R_3 is M^rev-equivariant, it follows easily that w_1 =_M w_1', w_2 =_M w_2', and x =_M x'. Thus, w_1 • w_2 =_M x, so it follows that L_2 ⊆ T_M(R). We show the reverse inclusion, which (by a simple argument) suffices to show that M is word-hyperbolic.
LEMMA 4.6. We have T_M(R) ⊆ L_2.

PROOF. We have shown the inclusion L_2 ⊆ T_M(R). Suppose now that w ≡ w_1 #_1 w_2 #_2 x^rev ∈ T_M(R), with alternating factorisation w_1 ≡ w_{1,1} w_{1,2} ⋯ w_{1,m}, where w_{1,j} ∈ R_{X(j)} for some parametrisation X. Now, w_1 may not be reduced; however, by removing each factor w_{1,j} with w_{1,j} =_{M_{X(j)}} 1, we obtain a reduced word w_1' ≡ w_{1,i_1} w_{1,i_2} ⋯ w_{1,i_ℓ}. Now, it may be the case that w_{1,i_j} ∼ w_{1,i_{j+1}}, that is, that w_{1,i_j} and w_{1,i_{j+1}} come from the same factor, and that this factorisation of w_1' is not alternating. However, and crucially, as R*_{X(i)} = R_{X(i)}, we can find some word w''_{1,i_j} ∈ R_{X(i_j)} such that w''_{1,i_j} ≡ w_{1,i_j} w_{1,i_{j+1}}. By merging all terms in this way, we find an alternating factorisation of w_1', so w_1' ∈ R. Thus there exists a word w_1' ∈ R such that w_1 →*_T w_1'. In exactly the same way, there are words w_2', x' ∈ R such that w_2 →*_T w_2' and x →*_T x'. In particular, x^rev →*_{T^rev} (x')^rev. We note in passing that w_1' • w_2' =_M x', by M-equivariance. It follows from the above that
w_1 →*_T w_1', w_2 →*_T w_2', and x^rev →*_{T^rev} (x')^rev. (4-4)
As the words w_1', w_2', x' ∈ R are reduced and satisfy w_1' • w_2' =_M x', we have
w_1' #_1 w_2' #_2 (x')^rev ∈ L_1. (4-5)
From Equations (4-4) and (4-5), we find immediately by the definition in Equation (4-3) that w ∈ L_2, which is what was to be shown.
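The merging step in this proof can be sketched on toy data: assuming each combing is closed under concatenation (R* = R), deleting identity factors and concatenating the newly adjacent same-factor pieces stays inside the combing. All names and data below are illustrative.

```python
# A toy example of the merging step: factors are pairs (i, w) with i ∈ {1, 2}
# the free factor and w a combing word. When R_i* = R_i, removing identity
# factors from an alternating word and merging adjacent same-factor pieces
# yields an alternating word whose factors still lie in the combings.
# Invented data throughout.

def remerge(factors, combings, is_identity):
    """combings[i]: membership predicate for R_i, assumed closed under
    concatenation. Returns a reduced alternating list of factors."""
    kept = [(i, w) for (i, w) in factors if not is_identity(i, w)]
    out = []
    for i, w in kept:
        if out and out[-1][0] == i:
            out[-1] = (i, out[-1][1] + w)  # merge; needs R_i* = R_i
        else:
            out.append((i, w))
    assert all(combings[i](w) for i, w in out)
    return out

# R_1 = {a}* and R_2 = {b}* are closed under concatenation; the empty word
# represents the identity in each factor.
combings = {1: lambda w: set(w) <= {"a"}, 2: lambda w: set(w) <= {"b"}}
word = [(1, "a"), (2, "bb"), (1, "a"), (2, ""), (1, "aa")]
reduced = remerge(word, combings, lambda i, w: w == "")
```

Deleting the identity factor (2, "") makes the two neighbouring factor-1 pieces adjacent, and they merge into a single combing word.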
To finish our proof, we must simply conclude that L_2 is context-free, which follows by combining the facts that (i) L_1 is a context-free language (the proof of this uses nothing about 1-uniqueness); (ii) R^(3)(L_1) is a context-free language by Proposition 3.1; and (iii) the intersection of a context-free language with a regular language is context-free. Hence, as T_M(R) = L_2 by Lemma 4.6, it follows that (R, T_M(R)) is a word-hyperbolic structure for M; as R* = R, the monoid M is in fact ∗-word-hyperbolic. This completes the proof of Theorem 4.5.

The reader may feel somewhat unsatisfied by the lack of a theorem stating simply that 'the free product of two word-hyperbolic monoids is word-hyperbolic' (see also Section 5). However, the combination of Theorems B and 4.5 essentially covers all cases of interest. We demonstrate this now, by showing that the ∗-word-hyperbolic case can be viewed as a 'complement' to the 1-uniqueness case treated in Section 4.1.

PROPOSITION 4.7. Suppose M_1, M_2 are word-hyperbolic without 1-uniqueness with respect to regular combings R_1 (respectively R_2), and suppose further that the monoid free product M = M_1 * M_2 is word-hyperbolic with respect to some regular combing R. If Alt(R_1, R_2) ⊆ R, then M_1, M_2, and M are all ∗-word-hyperbolic.
We can thus simulate the multiplication table for M_2 with respect to R_2* by using T_M(R), and inserting sufficiently many z-symbols between the words in R_2*; rigorously, we perform a rational transduction of T_M(R) to first obtain all words of the form of Equation (4-7), and then erase all symbols z by taking a homomorphic image, and in this way obtain T_{M_2}(R_2*), which is thus context-free. Thus, M_2 is ∗-word-hyperbolic; by symmetry, so too is M_1. By Theorem 4.5, so too is M.
We remark on why this proposition is useful. Assume the notation of the proposition. Given the 'alternating' nature of a free product, it is very natural to ask for a regular combing R of M to at least contain the alternating products of elements from R_1 and R_2. Indeed, if it did not, then the regular combing of the free product could be seen as wholly artificial, and not in any way dependent on the structure of the free factors. In this natural setting, Proposition 4.7 then tells us: if M_1 and M_2 are word-hyperbolic, but without 1-uniqueness, then M_1 and M_2 must in fact be ∗-word-hyperbolic. We elaborate on this remark in Section 4.3, and use it to suggest that a new definition of word-hyperbolic monoid may be suitable. No new results are presented therein, and so Section 4.3 may be skipped without losing any readability of Section 5.

4.3. 1-uniqueness as the norm. The definition of word-hyperbolic semigroups by Duncan and Gilman has been noted by Cain and Maltcev [26] to lead to some minor technical issues to be fixed. Namely, Cain and Maltcev note the following: there exist a finite set A, a regular language R ⊆ A^+ and two non-isomorphic semigroups S, T, each generated by A, such that T_S(R) = T_T(R). That is, the word-hyperbolic structure (R, T_S(R)) does not necessarily determine the semigroup S up to isomorphism. (However, if considering monoids, this is not an issue, as the problem arises from the fact that some generators can be indecomposable in a semigroup, which never happens in monoids.) If, however, the associated homomorphism π : A^+ → S is assumed to be injective on A, then one can show that uniqueness up to isomorphism does hold [26, Proposition 3.5], and furthermore that every word-hyperbolic semigroup admits a word-hyperbolic structure with this additional 'injectivity on generators' requirement [26, Proposition 3.6]. It is therefore no real restriction to impose the requirement on word-hyperbolic semigroups that π be injective on the generators.
In a similar vein, we would like to suggest that, for word-hyperbolic monoids, the earlier result (Proposition 4.7) demonstrates that 1-uniqueness is natural. This argument is based on three desired premises: (1) the free product of two word-hyperbolic monoids ought to be word-hyperbolic; (2) a word-hyperbolic structure for a free product should reflect the structure of the free factors in an alternating manner; and (3) ∗-word-hyperbolicity should be exceptional, rather than the norm.
If these premises are accepted, and premise (2) is interpreted as in the paragraph following Proposition 4.7, then we conclude from Proposition 4.7 that any given word-hyperbolic monoid ought to be either ∗-word-hyperbolic, or else word-hyperbolic with 1-uniqueness. The third premise would therefore guide us towards prescribing that word-hyperbolic monoids with 1-uniqueness should be the norm. If the premises are accepted, a natural definition of word-hyperbolic monoid would thus be the following: a monoid M is word-hyperbolic if and only if it admits a finite generating set A and a regular combing R such that (i) the multiplication table T_M(R) is context-free; and (ii) ε ∈ R, and this is the only word in R that represents 1 ∈ M. If this were the definition of word-hyperbolic monoid, then the free product of two word-hyperbolic monoids would again be word-hyperbolic (Theorem B).
Whether these premises (1)-(3) are acceptable or not depends on the reader. Ideally, we would like to bypass this definition-based argument and say that every word-hyperbolic monoid admits a word-hyperbolic structure with 1-uniqueness, but we do not know whether this is the case. Indeed, one might suspect that this is not so.

THEOREM A'. Let C be a reversal-closed super-AFL. Let S_1, S_2 be C-tabled semigroups. Then the semigroup free product S_1 * S_2 is C-tabled.

THEOREM B'. Let C be a reversal-closed super-AFL. Let M_1, M_2 be C-tabled monoids. Then the monoid free product M_1 * M_2 is C-tabled.
These theorems, which are quite elegant to state, demonstrate that the property of having unique normal forms ensures that free products behave very well, although many of the difficulties arising from words representing the identity being inserted into other words are bypassed in this way. Additionally, Theorem B' yields the corresponding result for groups and group free products too, as the monoid free product of two groups coincides with the group free product of the same groups. In particular, we find the following corollaries, both corresponding to Corollary 4.3.

COROLLARY 5.2. The free product of two ET0L-tabled groups is ET0L-tabled.

LEMMA 1.10. Let u, v ∈ R \ {ε} be reduced words. Then u =_M v if and only if u =_S v.

and we have w_1 ≡ x_1 ≡ b^2 and w_2 ≡ x_3 ≡ b^2, and this satisfies w_1 • w_2 =_M x. We may reapply Lemma 1.11, and find ourselves in case (1), taking x_1 ≡ b^2 and x_n ≡ b^2.

COROLLARY 5.3. The free product of two IND-tabled groups is IND-tabled.

This complements the aforementioned result by Duncan, Evetts, Holt and Rees for EDT0L-tabled groups.