Convergence rate of entropy-regularized multi-marginal optimal transport costs

We investigate the convergence rate of multi-marginal optimal transport costs that are regularized with the Boltzmann-Shannon entropy, as the noise parameter $\varepsilon$ tends to $0$. We establish lower and upper bounds on the difference with the unregularized cost of the form $C\varepsilon\log(1/\varepsilon)+O(\varepsilon)$ for some explicit dimensional constants $C$ depending on the marginals and on the ground cost, but not on the optimal transport plans themselves. Upper bounds are obtained for Lipschitz costs, with a finer estimate for locally semi-concave costs, and lower bounds for $\mathscr{C}^2$ costs satisfying a signature condition on the mixed second derivatives that may include degenerate costs, thus generalizing results previously obtained in the two-marginal case and for non-degenerate costs. In particular, we obtain matching bounds in some typical situations where the optimal plan is deterministic.


Notations
Throughout the article, $N \in \mathbb{N}^*$ denotes the dimension of the ambient space $\mathbb{R}^N$ and $m \in \mathbb{N}$ is an integer such that $m \geq 2$.
$B^d_r(x)$: open Euclidean ball of radius $r$ centered at $x$ in $\mathbb{R}^d$, dropping the superscript $d$ when $d = N$;
$X_i$: a subset of $\mathbb{R}^N$ for any index $i \in \{1, \ldots, m\}$;
$X$: product $X_1 \times \ldots \times X_m$ whenever the $X_i$'s are $m$ subsets of $\mathbb{R}^N$;
$A_{-i}$: product $\prod_{1\leq j\leq m,\, j\neq i} A_j$ if $A = A_1 \times \ldots \times A_m \subseteq X$ and $i \in \{1, \ldots, m\}$;
$x_i$, $x$, $\mathbf{x}$: a point in $X_i$, in some $X_j$, $j \in \{1, \ldots, m\}$, and in $X$ respectively;
$\mathbf{x}_q$: $(x_i)_{i\in q}$ if $q \subseteq \{1, \ldots, m\}$ and $\mathbf{x} \in X$;
$e_i$: $i$-th coordinate map $e_i : \mathbf{x} = (x_1, \ldots, x_m) \mapsto x_i$;
$B_r(\mathbf{x})$: open ball of radius $r$ centered at $\mathbf{x} \in (\mathbb{R}^N)^m$ for the above norm;
$C^{0,1}_{\mathrm{loc}}(X)$: space of real-valued locally Lipschitz functions on $X$, a sub-manifold of $\mathbb{R}^N$ or $(\mathbb{R}^N)^m$;
$[f]_{C^{0,1}(X)}$: Lipschitz constant of $f : X \to \mathbb{R}$, where $X$ is a subset of $\mathbb{R}^N$ or $(\mathbb{R}^N)^m$, for the above norms;
$C^{1,1}_{\mathrm{loc}}(X)$: space of differentiable real-valued functions on $X$, a sub-manifold of $\mathbb{R}^N$ or $(\mathbb{R}^N)^m$, with locally Lipschitz differential;
$\mathcal{P}(X)$: space of probability measures on a metric space $X$;
$\|\cdot\|_{L^p(\mu)}$: $L^p$ norm induced by a measure $\mu$, where $p \in [1, +\infty]$;
$\mathrm{spt}\,\mu$: support of the measure $\mu$;
$\mu \llcorner A$: restriction of the Borel measure $\mu$ to the Borel set $A$, defined by $\mu \llcorner A(E) = \mu(A \cap E)$ for every $E$;
$\mathcal{H}^s_X$: $s$-dimensional Hausdorff measure on the metric space $X$ endowed with the Borel $\sigma$-algebra (the subscript $X$ will often be dropped);
$M_N(\mathbb{R})$: space of real matrices of size $N \times N$, endowed with the Frobenius norm induced by the scalar product $A \cdot B := \mathrm{Tr}(A^T B)$, for $A, B \in M_N(\mathbb{R})$;
$S_N(\mathbb{R})$: subspace of real symmetric matrices of size $N \times N$;
$\Delta_P$: simplex of $P$-tuples $t = (t_p)_{p\in P}$ such that $t_p \geq 0$ for all $p$ and $\sum_{p\in P} t_p = 1$.

Introduction
We consider an $m$-tuple of probability measures $\mu_i$ compactly supported on sub-manifolds $X_i \subseteq \mathbb{R}^N$ of dimension $d_i$ and a cost function $c : X_1 \times \ldots \times X_m \to \mathbb{R}_+$. The Entropic Multi-Marginal Optimal Transport problem is defined as:
$$\mathrm{MOT}_\varepsilon := \inf_{\gamma \in \Pi(\mu_1,\ldots,\mu_m)} \int_X c \,\mathrm{d}\gamma + \varepsilon \,\mathrm{Ent}\Big(\gamma \,\Big|\, \bigotimes_{i=1}^m \mu_i\Big),$$
where $\Pi(\mu_1, \ldots, \mu_m)$ denotes the set of all probability measures $\gamma$ having $\mu_i$ as $i$-th marginal, i.e. $(e_i)_\sharp \gamma = \mu_i$ where $e_i : (x_1, \ldots, x_m) \mapsto x_i$, for every $i \in \{1, \ldots, m\}$. The classical multi-marginal optimal transport problem corresponds to the case $\varepsilon = 0$.
In the last decade, these two classes of problems (entropic optimal transport and multi-marginal optimal transport) have witnessed a growing interest and are now an active research topic. Entropic optimal transport (EOT) has found applications and proved to be an efficient way to approximate Optimal Transport (OT) problems, especially from a computational viewpoint. Indeed, when it comes to solving EOT by alternating Kullback-Leibler projections on the two marginal constraints, by the algebraic properties of the entropy such iterative projections correspond to the celebrated Sinkhorn algorithm [Sin64], applied in this framework in the pioneering works [Cut13; Ben+15]. The simplicity and the good convergence guarantees of this method (see [FL89; MG20; Car22; GN22]) compared to the algorithms used for OT problems then determined the success of EOT for applications in machine learning, statistics, image processing, language processing and other areas (see the monograph [PC19] or the lecture notes [Nut] and references therein).
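For discrete marginals, the alternating-projection scheme just described takes a fully explicit form. The following is a minimal sketch of a multi-marginal Sinkhorn iteration; the function name, signature and iteration count are our own illustration (not taken from the cited references), and no stopping criterion or log-domain stabilization is included.

```python
import numpy as np

def mm_sinkhorn(C, mus, eps, iters=500):
    """Sketch of multi-marginal Sinkhorn.

    C    : cost tensor of shape (n_1, ..., n_m)
    mus  : list of m marginal weight vectors, each summing to 1
    eps  : entropic regularization parameter
    Returns the (approximate) entropic optimal plan as a tensor."""
    m = C.ndim
    K = np.exp(-C / eps)                      # Gibbs kernel
    u = [np.ones(len(mu)) for mu in mus]      # scalings exp(phi_i / eps)
    for _ in range(iters):
        for i in range(m):
            # marginal onto coordinate i of the current (unscaled-in-i) plan
            T = K
            for j in range(m):
                if j != i:
                    shape = [1] * m
                    shape[j] = -1
                    T = T * (u[j] * mus[j]).reshape(shape)
            S = T.sum(axis=tuple(j for j in range(m) if j != i))
            u[i] = 1.0 / S                    # enforce the i-th marginal
    # assemble the plan gamma = K * prod_j (u_j mu_j)
    P = K
    for j in range(m):
        shape = [1] * m
        shape[j] = -1
        P = P * (u[j] * mus[j]).reshape(shape)
    return P
```

Each inner update is exactly the Kullback-Leibler projection onto the $i$-th marginal constraint; after convergence, all $m$ marginals of the returned tensor agree with the prescribed ones up to numerical precision.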
As concerns multi-marginal optimal transport (MOT), it arises naturally in many different areas of application, including economics [CE10], financial mathematics [BHP13; DS14a; DS14b; Enn+22], statistics [BK18; CCG16], image processing [Rab+11], tomography [Abr+17], machine learning [Haa+21; TJK22], fluid dynamics [Bre89] and quantum physics and chemistry, in the framework of density functional theory [BDG12; CFK13; FGG23]. The structure of solutions to the multi-marginal optimal transport problem is a notoriously delicate issue, and is still not well understood, despite substantial efforts on the part of many researchers [GŚ98; Car03; CN08; Hei02; Pas11; Pas12; KP14; KP15; CDD15; CS16; PV21b; MP17; PV21a]; see also the surveys [Pas15] and [DGN17]. Since $\mathrm{MOT}_\varepsilon$ can be seen as a perturbation of $\mathrm{MOT}_0$, it is natural to study its behaviour as $\varepsilon$ vanishes. In this paper we are mainly interested in investigating the rate of convergence of the entropic cost $\mathrm{MOT}_\varepsilon$ to $\mathrm{MOT}_0$ under some mild assumptions on the cost function and marginals.
In particular, we are going to extend the techniques introduced in [CPT23] for two marginals to the multi-marginal case, which will also let us generalize the bounds in [CPT23] to the case of degenerate cost functions. For the two-marginal, non-degenerate case we also refer the reader to a very recent (and elegant) paper [MS23], where the authors push the analysis of the convergence rate a little further by disentangling the roles of $\int c \,\mathrm{d}\gamma$ and the relative entropy in the total cost and deriving a convergence rate for both these terms. Notice that, concerning the convergence rate of entropic multi-marginal optimal transport, an upper bound has already been established in [EN23], which depends on the number of marginals and the quantization dimension of the optimal solutions to $(\mathrm{MOT}_\varepsilon)$ with $\varepsilon = 0$. Here we provide an improved, smaller, upper bound, which depends only on the marginals, but not on the optimal transport plans for the unregularized problem, and we also provide a lower bound depending on a signature condition on the mixed second derivatives of the cost function, which was introduced in [Pas12]. The main difficulty consists in adapting the estimates of [CPT23] to the local structure of the optimal plans described in [Pas12].
Our main findings can be summarized as follows: we establish two upper bounds, one valid for locally Lipschitz costs and a finer one valid for locally semi-concave costs. The proofs rely, as in [CPT23], on a multi-marginal variant of the block approximation introduced in [Car+17]. Notice that in this case the bound depends only on the dimensions of the supports of the marginals. Moreover, for locally semi-concave cost functions, by exploiting Alexandrov-type results as in [CPT23], we improve the upper bound by a $1/2$ factor, obtaining an inequality of the form
$$\mathrm{MOT}_\varepsilon \leq \mathrm{MOT}_0 + \frac{1}{2}\Big(\sum_{i=1}^m d_i - \max_i d_i\Big)\,\varepsilon\log(1/\varepsilon) + O(\varepsilon).$$
We stress that this upper bound is smaller than or equal to the one provided in [EN23, Theorem 3.8], which is of the form $\frac{1}{2}(m-1)D\varepsilon\log(1/\varepsilon) + O(\varepsilon)$, where $D$ is a quantization dimension of the support of an optimal transport plan. Indeed, $D$ must be greater than or equal to the maximum dimension of the supports of the marginals. The inequality may be strict, for example in the two-marginal case with unequal dimensions, as shown in Section 5.
For the lower bound, from the dual formulation of $(\mathrm{MOT}_\varepsilon)$ we have
$$\mathrm{MOT}_\varepsilon \geq \mathrm{MOT}_0 - \varepsilon \log \int_X e^{-E/\varepsilon} \,\mathrm{d}\Big(\bigotimes_{i=1}^m \mu_i\Big),$$
where $E(x_1, \ldots, x_m) = c(x_1, \ldots, x_m) - \oplus_{i=1}^m \varphi_i(x_i)$ is the duality gap and $(\varphi_1, \ldots, \varphi_m)$ are Kantorovich potentials for the unregularized problem $(\mathrm{MOT}_\varepsilon)$ with $\varepsilon = 0$. By using the singular value decomposition of the bilinear form obtained as an average of mixed second derivatives of the cost, and a signature condition introduced in [Pas11], we are able to prove that $E$ detaches quadratically from the set $\{E = 0\}$; this allows us to estimate the previous integral in the desired way, as in [CPT23], and to improve the results in [EN23], where only an upper bound depending on the quantization dimension of the solution to the unregularized problem is provided. Moreover, this slightly more flexible use of Minty's trick compared to [CPT23] allows us to obtain a lower bound also for degenerate cost functions in the two-marginal setting. Given a $\kappa$ depending on a signature condition (see (PS($\kappa$))) on the second mixed derivatives of the cost, the lower bound can be summarized as follows:
$$\mathrm{MOT}_\varepsilon \geq \mathrm{MOT}_0 + \frac{\kappa}{2}\,\varepsilon\log(1/\varepsilon) + O(\varepsilon).$$
The paper is organized as follows: in Section 2 we recall the multi-marginal optimal transport problem and some results concerning the structure of optimal solutions, in particular the ones in [Pas11], and we define its entropic regularization. Section 3 is devoted to the upper bounds stated in Theorem 3.2 and Theorem 3.7. In Section 4 we establish the lower bound stated in Proposition 4.2. Finally, in Section 5 we provide some examples for which we get matching bounds.

Preliminaries
Given $m$ compactly supported probability measures $\mu_i$ on sub-manifolds $X_i$ of dimension $d_i$ in $\mathbb{R}^N$, for $i \in \{1, \ldots, m\}$, and a continuous cost function $c : X_1 \times X_2 \times \ldots \times X_m \to \mathbb{R}_+$, the multi-marginal optimal transport problem consists in solving the following optimization problem:
$$\inf\Big\{\int_X c \,\mathrm{d}\gamma \;:\; \gamma \in \Pi(\mu_1, \ldots, \mu_m)\Big\}, \quad (\mathrm{MOT})$$
where $X := X_1 \times X_2 \times \ldots \times X_m$ and $\Pi(\mu_1, \ldots, \mu_m)$ denotes the set of probability measures on $X$ whose marginals are the $\mu_i$'s. The formulation above is also known as the Kantorovich problem; it amounts to a linear minimization problem over a convex, weakly compact set, so it is not difficult to prove the existence of a solution by the direct method of the calculus of variations. Much of the attention in the optimal transport community is rather focused on uniqueness and on the structure of the minimizers.
In particular, one is mainly interested in determining whether the solution is concentrated on the graph of a function $(T_2, \ldots, T_m)$ over the first marginal, where $(T_i)_\sharp \mu_1 = \mu_i$ for $i \in \{2, \ldots, m\}$, in which case this function induces a solution à la Monge, that is $\gamma = (\mathrm{Id}, T_2, \ldots, T_m)_\sharp \mu_1$.
In the two-marginal setting, the theory is fairly well understood and it is well known that under mild conditions on the cost function (e.g. the twist condition) and on the marginals (e.g. being absolutely continuous with respect to Lebesgue), the solution to (MOT) is unique and concentrated on the graph of a function; we refer the reader to [San15] for a glimpse of this theory. The extension to the multi-marginal case is still not well understood, but it has recently attracted a lot of attention due to a diverse variety of applications.
In particular, in his seminal works [Pas11; Pas12], Pass established some conditions, more restrictive than in the two-marginal case, ensuring the existence of a solution concentrated on a graph. In this work we rely on the following (local) result from [Pas12], giving an upper bound on the dimension of the support of solutions to (MOT). Let $P$ be the set of partitions of $\{1, \ldots, m\}$ into two non-empty disjoint subsets: $p = \{p_-, p_+\} \in P$ if $p_- \cup p_+ = \{1, \ldots, m\}$, $p_- \cap p_+ = \emptyset$ and $p_-, p_+ \neq \emptyset$. Then for each $p \in P$ we denote by $g_p$ the symmetric bilinear form on the tangent bundle $TX$ induced by the mixed second derivatives of $c$ taken between the two blocks of the partition, namely
$$g_p := \sum_{i \in p_-} \sum_{j \in p_+} D^2_{x_i x_j} c,$$
viewed as a symmetric bilinear form on the whole tangent bundle $TX$. Define
$$G_c := \mathrm{conv}\{g_p \;:\; p \in P\} \quad (2.1)$$
to be the convex hull generated by the $g_p$'s; it is easy to verify that each $g \in G_c$ is symmetric, and therefore its signature, denoted by $(d_+(g), d_-(g), d_0(g))$, is well defined. Then, the following result from [Pas12] gives a control on the dimension of the support of the optimizer(s) in terms of these signatures.
Theorem 2.1 (Part of [Pas12, Theorem 2.3]). Let $\gamma$ be a solution to (MOT) and suppose that some $g \in G_c$ has signature $(d_+, d_-, d_0)$ at a point $\mathbf{x} \in \mathrm{spt}\,\gamma$, the signature being the numbers of positive, negative and zero eigenvalues. Then there exists a neighbourhood of $\mathbf{x}$ in which $\mathrm{spt}\,\gamma$ is contained in a Lipschitz sub-manifold of dimension at most $d_+ + d_0$.

For the following, it is important to notice that by standard linear algebra arguments we have $d_+(g) + d_0(g) \geq \max_i d_i$ for each $g \in G_c$. This implies that the smallest bound on the dimension of $\mathrm{spt}\,\gamma$ which Theorem 2.1 can provide is $\max_i d_i$.
Remark 2.3 (Two-marginal case). When $m = 2$, the only $g \in G_c$ coincides precisely with the pseudo-metric introduced by Kim and McCann in [KM10]. Assuming for simplicity that $d_1 = d_2 = d$, they noted that $g$ has signature $(d, d, 0)$ whenever $c$ is non-degenerate, so Theorem 2.1 generalizes their result, since it applies even when non-degeneracy fails and provides new information in the two-marginal case: the signature of $g$ is $(r, r, 2d - 2r)$, where $r$ is the rank of $D^2_{x_1 x_2} c$. Notice that this will help us generalize the results established in [CPT23; EN23] to the case of a degenerate cost function.
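The signature count in the two-marginal case is a direct matrix computation. Writing $A := D^2_{x_1 x_2} c(\mathbf{x}) \in \mathbb{R}^{d\times d}$ of rank $r$, the form above is represented by the block matrix

```latex
\[
g \;=\;
\begin{pmatrix}
0 & A \\
A^{T} & 0
\end{pmatrix},
\qquad
g\begin{pmatrix} \xi \\ \eta \end{pmatrix}\cdot\begin{pmatrix} \xi \\ \eta \end{pmatrix}
\;=\; 2\,\xi^{T} A\,\eta .
\]
% If A = U \Sigma V^T is a singular value decomposition with rank(A) = r,
% the change of variables \xi = U\xi', \eta = V\eta' reduces the form to
% 2 \sum_{k=1}^{r} \sigma_k \xi'_k \eta'_k, and the further rotation
% \xi'_k = (a_k + b_k)/\sqrt{2}, \eta'_k = (a_k - b_k)/\sqrt{2} turns it into
% \sum_{k=1}^{r} \sigma_k (a_k^2 - b_k^2):
% hence the signature is (r, r, 2d - 2r).
```

In particular, full rank $r = d$ recovers the Kim-McCann signature $(d, d, 0)$.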
It is well known that, under some mild assumptions, the Kantorovich problem (MOT) is dual to
$$\sup\Big\{\sum_{i=1}^m \int_{X_i} \varphi_i \,\mathrm{d}\mu_i \;:\; \oplus_{i=1}^m \varphi_i \leq c\Big\}. \quad (\mathrm{MD})$$
Besides, it admits solutions $(\varphi_i)_{1\leq i\leq m}$, called Kantorovich potentials, when $c$ is continuous and all the $X_i$'s are compact, and these solutions may be assumed $c$-conjugate, in the sense that for every $i \in \{1, \ldots, m\}$,
$$\varphi_i(x_i) = \inf_{\mathbf{x}_{-i} \in X_{-i}} \Big(c(\mathbf{x}) - \sum_{j \neq i} \varphi_j(x_j)\Big). \quad (2.2)$$
We recall the entropic counterpart of (MOT): given $m$ probability measures $\mu_i$ on $X_i$ as before and a continuous cost function $c$,
$$\inf_{\gamma \in \Pi(\mu_1,\ldots,\mu_m)} \int_X c \,\mathrm{d}\gamma + \varepsilon\, \mathrm{Ent}\Big(\gamma \,\Big|\, \bigotimes_{i=1}^m \mu_i\Big), \quad (\mathrm{MOT}_\varepsilon)$$
where $\mathrm{Ent}(\cdot\,|\otimes_{i=1}^m \mu_i)$ is the Boltzmann-Shannon relative entropy (or Kullback-Leibler divergence) w.r.t. the product measure $\otimes_{i=1}^m \mu_i$, defined for general probability measures $p, q$ as
$$\mathrm{Ent}(p \,|\, q) := \begin{cases} \int \log\Big(\dfrac{\mathrm{d}p}{\mathrm{d}q}\Big) \,\mathrm{d}p & \text{if } p \ll q,\\ +\infty & \text{otherwise.}\end{cases}$$
The fact that $q$ is a probability measure ensures that $\mathrm{Ent}(p \,|\, q) \geq 0$. The dual problem of $(\mathrm{MOT}_\varepsilon)$ reads as
$$\sup_{(\varphi_i)_i} \;\sum_{i=1}^m \int_{X_i} \varphi_i \,\mathrm{d}\mu_i - \varepsilon \int_X e^{(\oplus_{i=1}^m \varphi_i - c)/\varepsilon} \,\mathrm{d}\Big(\bigotimes_{i=1}^m \mu_i\Big) + \varepsilon; \quad (\mathrm{MD}_\varepsilon)$$
see the recent presentations cited in the introduction. It admits an equivalent "log-sum-exp" form:
$$\sup_{(\varphi_i)_i} \;\sum_{i=1}^m \int_{X_i} \varphi_i \,\mathrm{d}\mu_i - \varepsilon \log \int_X e^{(\oplus_{i=1}^m \varphi_i - c)/\varepsilon} \,\mathrm{d}\Big(\bigotimes_{i=1}^m \mu_i\Big), \quad (\mathrm{MD}'_\varepsilon)$$
which is invariant under the same transformations (replacing each $\varphi_i$ by $\varphi_i + \lambda_i$ for constants $\lambda_i$) without assuming $\sum_{i=1}^m \lambda_i = 0$. From $(\mathrm{MOT}_\varepsilon)$ and $(\mathrm{MD}_\varepsilon)$ we recover, as $\varepsilon \to 0$, the unregularized multi-marginal optimal transport (MOT) and its dual (MD) introduced above. The link between multi-marginal optimal transport and its entropic regularization is very strong, and a consequence of the $\Gamma$-convergence of $(\mathrm{MOT}_\varepsilon)$ towards (MOT) (one can adapt the proof in [Car+17], or see [BCN19; GKR20] for $\Gamma$-convergence in some specific cases) is that $\mathrm{MOT}_\varepsilon \to \mathrm{MOT}_0$. By the direct method in the calculus of variations and the strict convexity of the entropy, one can show that $(\mathrm{MOT}_\varepsilon)$ admits a unique solution $\gamma_\varepsilon$, called the optimal entropic plan. Moreover, there exist $m$ real-valued Borel functions $\varphi^\varepsilon_i$ such that
$$\gamma_\varepsilon = \exp\Big(\frac{\oplus_{i=1}^m \varphi^\varepsilon_i - c}{\varepsilon}\Big) \bigotimes_{i=1}^m \mu_i, \quad (2.3)$$
and these functions have continuous representatives and are uniquely determined, up to additive constants, almost everywhere. The reader is referred to the analysis of [MG20], to [Nen16] for the extension to the multi-marginal setting, and to [BL92; BLN94; Csi75; FG97; RT98] for earlier references in the two-marginal framework.
The functions $\varphi^\varepsilon_i$ in (2.3) are called Schrödinger potentials, the terminology being motivated by the fact that they solve the dual problem $(\mathrm{MD}_\varepsilon)$ and are as such the (unique) solutions to the so-called Schrödinger system: for all $i \in \{1, \ldots, m\}$,
$$\varphi^\varepsilon_i(x_i) = -\varepsilon \log \int_{X_{-i}} \exp\Big(\frac{\sum_{j\neq i} \varphi^\varepsilon_j(x_j) - c(\mathbf{x})}{\varepsilon}\Big) \,\mathrm{d}\Big(\bigotimes_{j\neq i}\mu_j\Big)(\mathbf{x}_{-i}), \quad (2.5)$$
where $X_{-i} = \prod_{1\leq j\leq m,\, j\neq i} X_j$. Note that (2.5) is a "softmin" version of the multi-marginal $c$-conjugacy relation (2.2) for Kantorovich potentials.
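To see in which sense the Schrödinger system is a "softmin" version of $c$-conjugacy, one can invoke the elementary Laplace asymptotics (a standard fact, stated here informally): for a continuous function $f$ bounded below on the support of a probability measure $\mu$,

```latex
\[
-\varepsilon\log\int e^{-f/\varepsilon}\,\mathrm{d}\mu
\;\xrightarrow[\varepsilon\to 0]{}\;
\operatorname*{ess\,inf}_{\operatorname{spt}\mu} f .
\]
% Applying this with f(x_{-i}) = c(x_i, x_{-i}) - \sum_{j \neq i} \varphi_j^\varepsilon(x_j)
% and \mu = \bigotimes_{j \neq i} \mu_j, the Schrödinger system formally recovers,
% as \varepsilon \to 0, the conjugacy relation
%   \varphi_i(x_i) = \min_{x_{-i}} \big( c(x) - \sum_{j \neq i} \varphi_j(x_j) \big),
% i.e. relation (2.2) restricted to the supports of the marginals.
```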

Upper bounds
We start by establishing an upper bound, depending on the dimensions of the marginals, for locally Lipschitz cost functions. We will then improve it for locally semi-concave (in particular $C^2$) cost functions.

Upper bound for locally Lipschitz costs
The natural notion of dimension which arises is the entropy dimension, also called information dimension or Rényi dimension [Rén59].
Definition 3.1 (Rényi dimension (following [You82])). If $\mu$ is a probability measure over a metric space $X$, we set for every $\delta > 0$
$$H_\delta(\mu) := \inf\Big\{\sum_{n\in\mathbb{N}} \mu(A_n) \log\frac{1}{\mu(A_n)}\Big\},$$
where the infimum is taken over countable partitions $(A_n)_{n\in\mathbb{N}}$ of $X$ by Borel subsets of diameter less than $\delta$, and we define the lower and upper entropy dimensions of $\mu$ respectively by:
$$\dim_R(\mu) := \liminf_{\delta\to 0} \frac{H_\delta(\mu)}{\log(1/\delta)}, \qquad \overline{\dim}_R(\mu) := \limsup_{\delta\to 0} \frac{H_\delta(\mu)}{\log(1/\delta)}.$$
Notice that if $\mu$ is compactly supported on a Lipschitz manifold of dimension $d$, then $\log N_\delta(\mathrm{spt}\,\mu) \leq d\log(1/\delta) + C$ for some constant $C > 0$ and every $\delta \in (0,1]$, where $N_\delta(\mathrm{spt}\,\mu)$ is the box-counting number of $\mathrm{spt}\,\mu$, i.e. the minimal number of sets of diameter $\delta > 0$ which cover $\mathrm{spt}\,\mu$. In particular, by concavity of $t \mapsto t\log(1/t)$, we have
$$H_\delta(\mu) \leq \log N_\delta(\mathrm{spt}\,\mu) \leq d\log(1/\delta) + C. \quad (3.1)$$
We refer to the beginning of [CPT23, §3.1] for additional information and references on the Rényi dimension.
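As a sanity check of the definition (using the partition form of $H_\delta$ recalled above), the uniform measure on the unit cube has entropy dimension $d$:

```latex
\[
\mu=\mathcal{L}^{d}\llcorner[0,1]^{d}:\qquad
H_{\delta}(\mu)\;\le\;\sum_{n=1}^{k^{d}}
\mu(A_{n})\log\frac{1}{\mu(A_{n})}
\;=\;\log\big(k^{d}\big)\;=\;d\,\log(1/\delta)+O(1),
\]
% taking for (A_n) the partition of the cube into k^d congruent sub-cubes
% with k = \lceil \sqrt{d}/\delta \rceil, so that each has diameter at most
% \delta and mass exactly k^{-d}. Conversely, any set of diameter at most
% \delta has \mu-mass at most C_d \delta^d, whence every term satisfies
% \log(1/\mu(A_n)) \ge d \log(1/\delta) - \log C_d, giving the matching
% lower bound and \dim_R(\mu) = \overline{\dim}_R(\mu) = d.
```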
The following theorem establishes an upper bound for locally Lipschitz costs.
Theorem 3.2. Assume that for $i \in \{1, \ldots, m\}$, $\mu_i \in \mathcal{P}(X_i)$ is a compactly supported measure on a Lipschitz sub-manifold $X_i$ of dimension $d_i$, and that $c \in C^{0,1}_{\mathrm{loc}}(X)$. Then
$$\mathrm{MOT}_\varepsilon \leq \mathrm{MOT}_0 + \Big(\sum_{i=1}^m d_i - \max_i d_i\Big)\,\varepsilon\log(1/\varepsilon) + O(\varepsilon).$$

Proof. Given an optimal plan $\gamma_0$ for $\mathrm{MOT}_0$, we use the so-called "block approximation" introduced in [Car+17]. For every $\delta > 0$ and $i \in \{1, \ldots, m\}$, consider a partition $X_i = \bigcup_{n\in\mathbb{N}} A^n_i$ into Borel sets such that $\mathrm{diam}(A^n_i) \leq \delta$ for every $n \in \mathbb{N}$, and set
$$\gamma_\delta := \sum_{\mathbf{n}\in\mathbb{N}^m} \gamma_0(A^{n_1}_1 \times \ldots \times A^{n_m}_m)\, \bigotimes_{i=1}^m \frac{\mu_i \llcorner A^{n_i}_i}{\mu_i(A^{n_i}_i)},$$
with the convention that the summand is $0$ whenever $\mu_i(A^{n_i}_i) = 0$ for some $i$. By definition, $\gamma_\delta \ll \otimes_{i=1}^m \mu_i$, and we may check that its marginals are the $\mu_i$'s. Let us compute its entropy, assuming for simplicity that $\mu_m$ is such that $\overline{\dim}_R(\mu_m) = \max_i \overline{\dim}_R(\mu_i)$:
$$\mathrm{Ent}\Big(\gamma_\delta \,\Big|\, \bigotimes_{i=1}^m \mu_i\Big) = \sum_{\mathbf{n}\in\mathbb{N}^m} \gamma_0(A^{n_1}_1 \times \ldots \times A^{n_m}_m) \log\frac{\gamma_0(A^{n_1}_1 \times \ldots \times A^{n_m}_m)}{\prod_{i=1}^m \mu_i(A^{n_i}_i)} \leq \sum_{i=1}^{m-1} \sum_{n_i\in\mathbb{N}} \mu_i(A^{n_i}_i) \log\frac{1}{\mu_i(A^{n_i}_i)},$$
the last inequality coming from the inequality $\gamma_0(A^{n_1}_1 \times \ldots \times A^{n_m}_m) \leq \mu_m(A^{n_m}_m)$. Taking partitions $(A^n_j)_{n\in\mathbb{N}}$ of diameter smaller than $\delta$ such that $\sum_{n_j\in\mathbb{N}} \mu_j(A^{n_j}_j)\log\frac{1}{\mu_j(A^{n_j}_j)} \leq H_\delta(\mu_j) + \delta$, we obtain $\mathrm{Ent}(\gamma_\delta \,|\, \otimes_i \mu_i) \leq \sum_{j=1}^{m-1} (H_\delta(\mu_j) + \delta)$. Since the $\mu_i$'s have compact support and $c$ is locally Lipschitz, for $\delta$ small enough there exists $L \in (0, +\infty)$, not depending on $\delta$, such that $[c]_{C^{0,1}(A)} \leq L$ for every cell $A = A^{n_1}_1 \times \ldots \times A^{n_m}_m$ charged by $\gamma_\delta$, so that $\int_X c \,\mathrm{d}\gamma_\delta \leq \int_X c \,\mathrm{d}\gamma_0 + L\sqrt{m}\,\delta$. Thus, taking $\gamma_\delta$ as competitor in $(\mathrm{MOT}_\varepsilon)$, we obtain:
$$\mathrm{MOT}_\varepsilon \leq \mathrm{MOT}_0 + L\sqrt{m}\,\delta + \varepsilon \sum_{j=1}^{m-1}\big(H_\delta(\mu_j) + \delta\big).$$
Taking $\delta = \varepsilon$ and recalling that the $\mu_j$'s are concentrated on sub-manifolds of dimension $d_j$, which implies by (3.1) that $H_\varepsilon(\mu_j) \leq d_j \log(1/\varepsilon) + C_*$ for some $C_* \geq L + 1$ and every $j \in \{1, \ldots, m\}$, we get the conclusion.

Remark 3.3. If the $\mu_i$'s are merely assumed to have compact support (not necessarily supported on a sub-manifold), the above proof actually shows the slightly weaker estimate
$$\mathrm{MOT}_\varepsilon \leq \mathrm{MOT}_0 + \Big(\sum_{i=1}^m \overline{\dim}_R(\mu_i) - \max_i \overline{\dim}_R(\mu_i)\Big)\,\varepsilon\log(1/\varepsilon) + o(\varepsilon\log(1/\varepsilon)).$$
Indeed, for every $i$, by definition of $\overline{\dim}_R(\mu_i)$ we have $H_\delta(\mu_i) \leq (\overline{\dim}_R(\mu_i) + o(1))\log(1/\delta)$ as $\delta \to 0$.

Upper bound for locally semi-concave costs
We now provide a finer upper bound under the additional assumptions that the $X_i$'s are $C^2$ sub-manifolds of $\mathbb{R}^N$, that $c$ is locally semi-concave as in Definition 3.4 (which is the case when $c \in C^2(X, \mathbb{R}_+)$), and that the $\mu_i$'s are compactly supported measures on the $X_i$'s.

Definition 3.4 (Local semi-concavity). A function $f$ defined on an open convex subset $\Omega$ of a Euclidean space is $\lambda$-concave, $\lambda \in \mathbb{R}$, if $f - \frac{\lambda}{2}|\cdot|^2$ is concave on $\Omega$; $f$ is locally semi-concave if every point admits a neighbourhood (in local charts) on which $f$ is $\lambda$-concave for some $\lambda$.

Lemma 3.5 (Local semi-concavity and covering). Let $c : X \to \mathbb{R}_+$ be a locally semi-concave cost function and $(\varphi_i)_{1\leq i\leq m}$ be a system of $c$-conjugate functions as in (2.2), defined on compact subsets $K_i \subseteq X_i$. We can find $\lambda \in \mathbb{R}$, $J \in \mathbb{N}^*$ and, for every $i \in \{1, \ldots, m\}$, a finite open covering $(U^j_i)_{1\leq j\leq J}$ of $K_i$ together with bi-Lipschitz local charts $\psi^j_i : U^j_i \to \Omega^j_i$ satisfying the following properties, having set $\Omega^{\mathbf{j}} := \prod_{1\leq i\leq m} \Omega^{j_i}_i$ and $\psi^{\mathbf{j}} := (\psi^{j_1}_1, \ldots, \psi^{j_m}_m)$ for every $\mathbf{j} = (j_1, \ldots, j_m) \in \{1, \ldots, J\}^m$:
(i) for every $\mathbf{j} \in \{1, \ldots, J\}^m$, $c \circ (\psi^{\mathbf{j}})^{-1}$ is $\lambda$-concave on $\Omega^{\mathbf{j}}$;
(ii) for every $(i, j) \in \{1, \ldots, m\} \times \{1, \ldots, J\}$, $\varphi_i \circ (\psi^j_i)^{-1}$ is $\lambda$-concave on $\Omega^j_i$.
In particular, all the $\varphi_i$'s are locally semi-concave.
We are going to use an integral variant of the Alexandrov theorem, proved in [CPT23]. We may now state the main result of this section.

Theorem 3.7. Assume that for $i \in \{1, \ldots, m\}$, $\mu_i \in \mathcal{P}(X_i)$ is a compactly supported measure on a $C^2$ sub-manifold $X_i$ of dimension $d_i$, and that $c$ is locally semi-concave. Then
$$\mathrm{MOT}_\varepsilon \leq \mathrm{MOT}_0 + \frac12\Big(\sum_{i=1}^m d_i - \max_i d_i\Big)\,\varepsilon\log(1/\varepsilon) + O(\varepsilon).$$

Lower bound for C 2 costs with a signature condition
In this section we consider a cost $c \in C^2(X, \mathbb{R}_+)$, where $X = X_1 \times \ldots \times X_m$, and we assume that for every $i \in \{1, \ldots, m\}$ the measure $\mu_i$ is compactly supported on a $C^2$ sub-manifold $X_i \subseteq \mathbb{R}^N$ of dimension $d_i$. We are going to establish a lower bound of the same form as the fine upper bound of Theorem 3.7, the dimensional constant being this time related to the signature of some bilinear forms, following ideas from [Pas12].
Proof. Let $p = \{p_-, p_+\} \in P$. For $y \in \prod_{i\in p_\pm} K_i$, we set $\varphi_{p_\pm}(y) := \sum_{i\in p_\pm} \varphi_i(y_i)$. We identify any $\mathbf{x} \in K$ with $(\mathbf{x}_{p_-}, \mathbf{x}_{p_+})$. Since the $\varphi_i$'s are $c$-conjugate, for $\mathbf{x}, \mathbf{x}' \in K$ it holds:
$$\varphi_{p_-}(\mathbf{x}_{p_-}) + \varphi_{p_+}(\mathbf{x}'_{p_+}) \leq c(\mathbf{x}_{p_-}, \mathbf{x}'_{p_+}).$$

Now we do computations in local charts: consider
$C^2$ diffeomorphisms $\psi_i : U_i \to \psi_i(U_i)$ such that $B_R(x_i) \subseteq U_i$ for some $R > 0$ and the $\psi_i(U_i)$'s are balls centered at $0$, for every $i \in \{1, \ldots, m\}$. With a slight abuse, we use the same notation for points and functions written in these charts, and use Taylor's integral formula, where $\eta$ is the maximum over $p \in P$ of the moduli of continuity of $D^2_{p_- p_+} c$ at $\mathbf{x}$. Since $\eta$ is independent of $p$ and tends to $0$ as $r \to 0$ because $c$ is $C^2$, taking $g_{\mathbf{x}} = \sum_{p\in P} t_p\, g_p(\mathbf{x})$ for some $(t_p)_{p\in P} \in \Delta_P$ and averaging the previous inequality yields a quadratic detachment estimate for $E$. Finally, we can find a linear isomorphism $Q \in \mathrm{GL}(\sum_{i=1}^m d_i, \mathbb{R})$ which diagonalizes $g_{\mathbf{x}}$, such that, after setting $u := Q \circ (\psi_1, \ldots, \psi_m)$ and denoting $u = (u_+, u_-, u_0)$ according to the signs of the corresponding eigenvalues, we get the result by replacing $\eta$ with $\|Q^{-1}\|\eta$ and restricting $u$ to $B_\rho(\mathbf{x})$ for some small $\rho > 0$.

We will use the following positive signature condition:

(PS($\kappa$)) for every $\mathbf{x} \in X$, there exists $g \in G_c$ such that $d_+(g(\mathbf{x})) \geq \kappa$.

Proposition 4.2. Let $c \in C^2(X)$ and assume that for every $i \in \{1, \ldots, m\}$, $\mu_i$ is a probability measure compactly supported in $X_i$. If (PS($\kappa$)) is satisfied, then there exists a constant $C_* \in [0, \infty)$ such that for every $\varepsilon > 0$,
$$\mathrm{MOT}_\varepsilon \geq \mathrm{MOT}_0 + \frac{\kappa}{2}\,\varepsilon\log(1/\varepsilon) - C_*\,\varepsilon. \quad (4.3)$$

Proof. The measures $\mu_i$ being supported on some compact subsets $K_i \subseteq X_i$, consider a family $(\varphi_i)_{1\leq i\leq m} \in \prod_{i=1}^m C(K_i)$ of $c$-conjugate Kantorovich potentials. Taking $(\varphi_i)_{1\leq i\leq m}$ as competitor in $(\mathrm{MD}'_\varepsilon)$, we get the lower bound:
$$\mathrm{MOT}_\varepsilon \geq \mathrm{MOT}_0 - \varepsilon\log\int_K e^{-E/\varepsilon}\,\mathrm{d}\Big(\bigotimes_{i=1}^m \mu_i\Big),$$
where $E := c - \oplus_{i=1}^m \varphi_i$ on $K = \prod_{i=1}^m K_i$, as in Lemma 4.1. We are going to show that, for some constant $C > 0$ and for every $\varepsilon > 0$,
$$\int_K e^{-E/\varepsilon}\,\mathrm{d}\Big(\bigotimes_{i=1}^m \mu_i\Big) \leq C\,\varepsilon^{\kappa/2},$$
which yields (4.3) with $C_* = \log C$. For every $\mathbf{x} \in K$, we consider a quadratic form $g_{\mathbf{x}} \in \{g(\mathbf{x}) \,|\, g \in G_c\}$ of signature $(\kappa, d_-, d_0)$, which is possible thanks to (PS($\kappa$)), and take a local chart $u$ as given by Lemma 4.1, such that (4.1) holds with $\eta(r) \leq 1/2$ for every $r$ such that $B_r(\mathbf{x}) \subseteq U$. Notice that $u_{\mathbf{x}}$ is bi-Lipschitz with some constant $L_{\mathbf{x}}$ on $V_{\mathbf{x}}$.
For every $i \in \{1, \ldots, m\}$ we may write $\mu_i = f_i\, \mathcal{H}^{d_i} \llcorner X_i$ for some density $f_i : X_i \to \mathbb{R}_+$. By applying the co-area formula [Fed96, Theorem 3.2.22] several times to the projection maps onto the $X_i$'s, and then the area formula, we obtain
$$\int_{V_{\mathbf{x}}} e^{-E/\varepsilon}\,\mathrm{d}\Big(\bigotimes_{i=1}^m \mu_i\Big) \leq C_{\mathbf{x}}\,\varepsilon^{\kappa/2}$$
for some constant $C_{\mathbf{x}} > 0$ (which depends on $\mathbf{x}$ through $R$, $d_-$ and $d_0$). The sets $\{V_{\mathbf{x}}\}_{\mathbf{x}\in\Sigma}$ form an open covering of the compact set $\Sigma := \{\mathbf{x} \in K \,|\, E(\mathbf{x}) = 0\}$, hence we may extract a finite covering $V_{\mathbf{x}_1}, \ldots, V_{\mathbf{x}_L}$, so that for every $\varepsilon > 0$:
$$\int_{V_{\mathbf{x}_1} \cup \ldots \cup V_{\mathbf{x}_L}} e^{-E/\varepsilon}\,\mathrm{d}\Big(\bigotimes_{i=1}^m \mu_i\Big) \leq C_1\,\varepsilon^{\kappa/2}$$
for some constant $C_1 \in (0, +\infty)$. Finally, since $E$ is continuous and does not vanish on the compact set $K \setminus (V_{\mathbf{x}_1} \cup \ldots \cup V_{\mathbf{x}_L})$, it is bounded below by a positive constant $a$ there, so that the integral of $e^{-E/\varepsilon}$ over this set is $O(e^{-a/\varepsilon})$, hence
$$\int_K e^{-E/\varepsilon}\,\mathrm{d}\Big(\bigotimes_{i=1}^m \mu_i\Big) \leq C\,\varepsilon^{\kappa/2}$$
for some constant $C > 0$. This concludes the proof.
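The scaling $\varepsilon^{\kappa/2}$ appearing in the proof is the usual Laplace scaling of a Gaussian integral. Under the simplifying assumption that $E$ equals $|u_+|^2$ in the chart, the computation reduces to:

```latex
\[
\int_{B^{\kappa}_{\rho}} e^{-|u_+|^{2}/\varepsilon}\,\mathrm{d}u_+
 \;=\; \varepsilon^{\kappa/2}\int_{B^{\kappa}_{\rho/\sqrt{\varepsilon}}} e^{-|v|^{2}}\,\mathrm{d}v
 \;\leq\; \varepsilon^{\kappa/2}\,\pi^{\kappa/2},
\]
% by the change of variables u_+ = \sqrt{\varepsilon}\, v and the Gaussian
% integral \int_{\mathbb{R}^\kappa} e^{-|v|^2} dv = \pi^{\kappa/2}, so that
\[
-\varepsilon \log \int_{B^{\kappa}_{\rho}} e^{-|u_+|^{2}/\varepsilon}\,\mathrm{d}u_+
 \;\geq\; \frac{\kappa}{2}\,\varepsilon\log(1/\varepsilon)
 \;-\; \frac{\kappa}{2}\,\varepsilon\log \pi .
\]
```

This is exactly the order $\frac{\kappa}{2}\varepsilon\log(1/\varepsilon) - C_*\varepsilon$ of the lower bound (4.3).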

Examples and matching bound
We devote this section to applying the results stated above to several cost functions. For simplicity, we assume that the dimensions of the $X_i$'s are all equal to some common $d$ and that the cost function $c$ is $C^2$. As in [Pas12], for the lower bound we consider the metric $g$ obtained with $t_p = \frac{1}{2^{m-1}-1}$ for all $p \in P$; we recall that $P$ is the set of partitions of $\{1, \ldots, m\}$ into two non-empty disjoint subsets.
Example 5.1 (Two-marginal case). In previous works [CPT23; EN23] concerning the rate of convergence for the two-marginal problem, it was assumed that the cost function satisfies a non-degeneracy condition, namely that $D^2_{x_1 x_2} c$ has full rank. A direct consequence of our analysis is that we can provide a lower bound (the upper bound does not depend on such a condition) for costs for which the non-degeneracy condition fails. Let $r$ be the rank of $D^2_{x_1 x_2} c$ at the point where the non-degeneracy condition fails; then the signature of $g$ at this point is $(r, r, 2d - 2r)$, meaning that locally the support of the optimal $\gamma_0$ is at most $(2d - r)$-dimensional. Thus, the bounds become
$$\frac{r}{2}\,\varepsilon\log(1/\varepsilon) + O(\varepsilon) \;\leq\; \mathrm{MOT}_\varepsilon - \mathrm{MOT}_0 \;\leq\; \frac{d}{2}\,\varepsilon\log(1/\varepsilon) + O(\varepsilon).$$

Example 5.2 (Two-marginal case with unequal dimensions). Consider now the two-marginal case with unequal dimensions, say $d_1 > d_2$. Then, if $D^2_{x_1 x_2} c$ has full rank, that is $r = d_2$, we obtain a matching bound depending only on the lower-dimensional marginal:
$$-C_*\,\varepsilon \;\leq\; \mathrm{MOT}_\varepsilon - \mathrm{MOT}_0 - \frac{d_2}{2}\,\varepsilon\log(1/\varepsilon) \;\leq\; C^*\,\varepsilon$$
for some constants $C_*, C^* > 0$. If $\mu_1$ is absolutely continuous with respect to $\mathcal{H}^{d_1}$ on some smooth sub-manifold of dimension $d_1$, then any optimal transport plan is concentrated on a set of Hausdorff dimension no less than $d_1$, and thus the upper bound given in [EN23, Theorem 3.8] would be $\frac{d_1}{2}\varepsilon\log(1/\varepsilon) + O(\varepsilon)$, which is strictly worse than our estimate.
Example 5.3 (Negative harmonic cost). Consider the cost $c(x_1, \ldots, x_m) = h(\sum_{i=1}^m x_i)$, where $h$ is $C^2$ and $D^2 h > 0$. Assuming that the marginals have finite second moments, when $h(x) = |x|^2$ this kind of cost is equivalent to the negative harmonic cost $c(x_1, \ldots, x_m) = -\sum_{i<j} |x_i - x_j|^2$ (here $|\cdot|$ denotes the standard Euclidean norm); see [DGN17] for more details. The signature of the metric $g$ is $(d, (m-1)d, 0)$, thus the bounds between $\mathrm{MOT}_\varepsilon$ and $\mathrm{MOT}_0$ that we obtain are
$$\frac{d}{2}\,\varepsilon\log(1/\varepsilon) - C_*\,\varepsilon \;\leq\; \mathrm{MOT}_\varepsilon - \mathrm{MOT}_0 \;\leq\; \frac{(m-1)d}{2}\,\varepsilon\log(1/\varepsilon) + C^*\,\varepsilon$$
for some constants $C_*, C^* > 0$. We remark that it is known from [Pas12; DGN17] that a transport plan $\gamma_0$ is optimal if and only if it is supported on the set $\{(x_1, \ldots, x_m) \,|\, \sum_{i=1}^m x_i = \ell\}$, where $\ell \in \mathbb{R}^d$ is a constant, and that there exist solutions whose support has dimension exactly $(m-1)d$.
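The equivalence with the negative harmonic cost claimed above follows from expanding the square; since the marginals are fixed, the term $\sum_i |x_i|^2$ integrates to the same constant for every plan in $\Pi(\mu_1, \ldots, \mu_m)$:

```latex
\[
\Big|\sum_{i=1}^{m}x_{i}\Big|^{2}
=\sum_{i=1}^{m}|x_{i}|^{2}+2\sum_{i<j}\langle x_{i},x_{j}\rangle ,
\qquad
\sum_{i<j}|x_{i}-x_{j}|^{2}
=(m-1)\sum_{i=1}^{m}|x_{i}|^{2}-2\sum_{i<j}\langle x_{i},x_{j}\rangle ,
\]
% subtracting the two identities:
\[
-\sum_{i<j}|x_{i}-x_{j}|^{2}
\;=\;\Big|\sum_{i=1}^{m}x_{i}\Big|^{2}\;-\;m\sum_{i=1}^{m}|x_{i}|^{2},
\]
% so minimizing the negative harmonic cost over \Pi(\mu_1, ..., \mu_m) is the
% same, up to the additive constant -m \sum_i \int |x_i|^2 d\mu_i, as
% minimizing h(\sum_i x_i) with h = |.|^2.
```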
Example 5.4 (Gangbo-Święch cost and Wasserstein barycenter). Suppose that $c(x_1, \ldots, x_m) = \sum_{i<j} |x_i - x_j|^2$, known as the Gangbo-Święch cost [GŚ98]. Notice that this cost is equivalent to $c(x_1, \ldots, x_m) = h(\sum_{i=1}^m x_i)$ with $h$ $C^2$ and $D^2 h < 0$; then the signature of $g$ is $((m-1)d, d, 0)$ and we have a matching bound:
$$-C_*\,\varepsilon \;\leq\; \mathrm{MOT}_\varepsilon - \mathrm{MOT}_0 - \frac{(m-1)d}{2}\,\varepsilon\log(1/\varepsilon) \;\leq\; C^*\,\varepsilon.$$
Notice now that the $\mathrm{MOT}_0$ problem with the cost $c(x_1, \ldots, x_m) = \sum_i |x_i - T(x_1, \ldots, x_m)|^2$, where $T(x_1, \ldots, x_m) = \sum_{i=1}^m \lambda_i x_i$ is the Euclidean barycenter, is equivalent to $\mathrm{MOT}_0$ with the Gangbo-Święch cost, and the matching bound above still holds. Moreover, the multi-marginal problem with this particular cost has been shown [AC11] to be equivalent to the Wasserstein barycenter problem, in the sense that $T_\sharp \gamma_0 = \nu$ is the barycenter of $\mu_1, \ldots, \mu_m$.
Lemma 4.1. Let $(\varphi_i)_{1\leq i\leq m}$ be a system of $c$-conjugate functions on subsets $K_i \subseteq X_i$ for every $i$. Set $E := c - \varphi_1 \oplus \ldots \oplus \varphi_m$ on $K := K_1 \times \ldots \times K_m$, and take $\mathbf{x} \in K$ as well as some $g_{\mathbf{x}} \in \{g(\mathbf{x}) \,|\, g \in G_c\}$ of signature $(d_+, d_-, d_0)$, $G_c$ being defined in (2.1). Then there exist local coordinates around $\mathbf{x}$, i.e. $C^2$ diffeomorphisms $u$