Multiplicity one theorems over positive characteristic

Abstract In Aizenbud et al. (2010, Annals of Mathematics 172, 1407–1434), a multiplicity one theorem is proved for general linear groups, orthogonal groups, and unitary groups ($GL$, $O$, and $U$) over p-adic local fields. That is to say that when we have a pair of such groups $G_n \subseteq G_{n+1}$, any restriction of an irreducible smooth representation of $G_{n+1}$ to $G_n$ is multiplicity-free. This property is already known for $GL$ over a local field of positive characteristic, and in this paper, we also give a proof for $O$, $U$, and $SO$ over local fields of positive odd characteristic. These theorems are shown in Gan, Gross, and Prasad (2012, Sur les Conjectures de Gross et Prasad. I, Société Mathématique de France) to imply the uniqueness of Bessel models, and in Chen and Sun (2015, International Mathematics Research Notices 2015, 5849–5873) to imply the uniqueness of Rankin–Selberg models. We also prove simultaneously the uniqueness of Fourier–Jacobi models, following the outline of the proof in Sun (2012, American Journal of Mathematics 134, 1655–1678). By the Gelfand–Kazhdan criterion, the multiplicity one property for a pair $H \leq G$ follows from the statement that any distribution on G invariant under conjugation by H is also invariant under some anti-involution of G preserving H. This statement for $GL$, $O$, and $U$ over p-adic local fields is proved in Aizenbud et al. (2010, Annals of Mathematics 172, 1407–1434). An adaptation of the proof for $GL$ that works over local fields of positive odd characteristic is given in Mezer (2020, Mathematische Zeitschrift 297, 1383–1396). In this paper, we give similar adaptations of the proofs of the theorems on orthogonal and unitary groups, as well as similar theorems for special orthogonal groups and for symplectic groups. Our methods are a synergy of the methods used over characteristic 0 (Aizenbud et al. [2010, Annals of Mathematics 172, 1407–1434]; Sun [2012, American Journal of Mathematics 134, 1655–1678]; and Waldspurger [2012, Astérisque 346, 313–318]) and of those used in Mezer (2020, Mathematische Zeitschrift 297, 1383–1396).


Introduction
Let F be a local field of positive characteristic different from 2. Let K be either equal to F or a quadratic extension of it. Let V be a vector space of dimension n over K. Let W := V ⊕ K v_{n+1} be an (n + 1)-dimensional vector space containing it. Assume that we have a nondegenerate Hermitian (symmetric in the case K = F) form on W, with respect to which V is orthogonal to v_{n+1}. Note that this implies, in particular, that ⟨v_{n+1}, v_{n+1}⟩ ≠ 0. We let G denote either the group O or SO in the case K = F, or U in the case K ≠ F. Consider the group G(V) as a subgroup of G(W).
The following theorem is among the main theorems proved in this paper.
Let us define an anti-involution σ of G(W) for all three families of classical groups: In the case of O, define σ ∶ g ↦ g −1 .
In the case of SO, choose T ∈ G(V) of order 2 with det T = (−1)^{⌊(n+1)/2⌋}: one may choose such an element by taking a basis of V with respect to which the symmetric form is diagonal, and in this basis taking a diagonal matrix of ±1 entries with the appropriate parity of −1 entries. Define σ : g ↦ T g^{−1} T.
In the case of U, choose a basis of W for which the Hermitian product of every pair of basis vectors lies in F (for example, by choosing a basis that diagonalizes the Hermitian form). Then we have an involution T : v ↦ v̄ (writing v as a vector in this basis). Define an anti-involution of G(W) by σ : g ↦ T g^{−1} T.
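In all three cases, σ has the shape g ↦ T g^{−1} T for an involution T of the underlying space (with T = id in the O case), so the anti-involution property is a one-line check using only T² = 1:

```latex
\sigma(\sigma(g)) = T\,\bigl(T g^{-1} T\bigr)^{-1}\,T
                  = T\,T^{-1}\,g\,T^{-1}\,T = g,
\qquad
\sigma(gh) = T\,(gh)^{-1}\,T
           = \bigl(T h^{-1} T\bigr)\bigl(T g^{-1} T\bigr)
           = \sigma(h)\,\sigma(g).
```

In particular, σ reverses products and squares to the identity, which is exactly what the Gelfand–Kazhdan criterion requires of an anti-involution.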
Consider the action of G(V ) on G(W) by conjugation. The following theorem implies Theorem 1.1 using the Gelfand-Kazhdan criterion.

Theorem 1.2 Any G(V )-invariant distribution on G(W) is also invariant under σ.
The proof of this implication in characteristic zero is given in [12, Appendix B], [3, Section 1], and [14], and the same proofs apply verbatim in arbitrary odd characteristic.
We also prove another theorem, which we shall now describe; it is given in [12] for characteristic 0. One may also look there for more extensive explanations of the basic notations and definitions used. This theorem is related to the uniqueness of Fourier–Jacobi models, and it concerns all the previous families of classical groups, as well as Sp.
Let A be a finite-dimensional commutative involutive algebra over F, and let V be a finitely generated A-module. Let ε = ±1, and let τ be the involution of A. Assume that V is equipped with a nondegenerate ε-Hermitian form, i.e., a nondegenerate F-bilinear map ⟨⋅, ⋅⟩ : V × V → A satisfying A-linearity in the first argument and ⟨v, u⟩ = ε⟨u, v⟩^τ. Denote by S the group of all A-module automorphisms of V which preserve this form. It is a finite product of general linear groups, unitary groups, orthogonal groups, and symplectic groups. Denote by A^{τ=−ε} the subset of A of elements a satisfying a^τ = −εa. Let H := A^{τ=−ε} × V be the Heisenberg group attached to these data. We have a natural action of S on H. Denote by J := H ⋊ S the semidirect product of H and S with respect to this action. We prove in this paper the following theorem.
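For concreteness, with the group law (a, v)(b, u) = (a + b + ⟨v, u⟩ − ⟨u, v⟩, v + u) on A^{τ=−ε} × V (this normalization is our assumption; the source does not display the formula), the ε-Hermitian identity ⟨v, u⟩ = ε⟨u, v⟩^τ guarantees that the first coordinate indeed stays in A^{τ=−ε}:

```latex
\bigl(\langle v,u\rangle - \langle u,v\rangle\bigr)^{\tau}
  = \langle v,u\rangle^{\tau} - \langle u,v\rangle^{\tau}
  = \varepsilon\langle u,v\rangle - \varepsilon\langle v,u\rangle
  = -\varepsilon\bigl(\langle v,u\rangle - \langle u,v\rangle\bigr).
```

Here we used ⟨v, u⟩^τ = ε⟨u, v⟩ and ⟨u, v⟩^τ = ε⟨v, u⟩, both restatements of the ε-Hermitian condition together with ε² = 1.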
one property for GL(n + 1), GL(n), which is proved for positive characteristic in [9, Theorem 1.2] and in [1].

Theorem 1.9 (Uniqueness of Bessel models) Let V be a linear space with a symmetric or Hermitian (including the case K = F × F) form. Denote the respective orthogonal or unitary group by G(V). Let W be a subspace of odd codimension on which the form is nondegenerate and such that W^⊥ is split. Let H be the Bessel group corresponding to W, considered as a subgroup of G(V) × G(W), and let ν be a generic character of H. Then, for any irreducible smooth representations π of G(V) and π′ of G(W), one has dim Hom_H(π ⊗ π′, ν) ≤ 1.

[8, Chapter 12] (depending on some choices of characters). Take either π to be an irreducible smooth representation of G̃(V) (an appropriate double cover of G(V)) and π′ to be such a representation of G(W), or the other way around, i.e., π to be an irreducible smooth representation of G(V) and π′ to be such a representation of an appropriate G̃(W). Then one has dim Hom_H(π ⊗ π′, ν) ≤ 1.

Comparison with previous works
In [9], the proof of a multiplicity one theorem for GL_n in characteristic 0 is extended to cover positive odd characteristic as well. The purpose of this paper is to use these methods to extend the proofs of additional multiplicity one theorems from characteristic 0 to positive odd characteristic. The proofs for characteristic 0 on which we base this paper are given in [3, 12, 14]. Let us give an overview of the methods and steps of this paper, explaining which ones are taken from [3, 12, 14], which ones were introduced in [9], and which ones are new to this paper. In Section 3, we give reformulations of the problems, identical to the ones given in [3, 12, 14].
In Section 4, we use a certain analog of the Harish-Chandra descent method for positive characteristic, which gives weaker results than in the characteristic-zero case. The method in the full form used in [3, 12, 14] fails over fields of positive characteristic, due to inseparable extensions, and in fact this is the crucial point at which those proofs fail in positive characteristic.
In Section 5, we pass from the group to its Lie algebra using the Cayley transform. The difference from the analogous linearization in [3, 12, 14] is that, in those papers, linearization is done after the method of Harish-Chandra descent has restricted the possible support to the unipotent cone, whereas we only have a weaker restriction on the support.

D. Mezer
In Section 6, we adapt the main new ideas of [9] to the unitary, orthogonal, and symplectic settings, introducing a new family of automorphisms ρ_g playing the same role as ρ in [9]. Section 7 uses the method of stratification to reduce the problem to a problem on a single orbit. The contents of this section are completely analogous to what is done in [3, 12, 14], only without the restriction to nilpotent orbits, which is not truly needed, as was the case in [9].
In Section 8, we solve the previous problem on a single orbit by repeating the arguments and ideas used in [3,12,14], sometimes giving slight generalizations of them.
The archimedean version of Theorems 1.1 and 1.4 can be found in [13]. Special cases of Theorems 1.1 and 1.2 can be found in [4].

Preliminaries and notation
Most of this section is borrowed from the preliminaries sections of [2, 3], and also of [9] (which was itself mostly borrowed from the previous two).
Let us now introduce a uniform notation for all the groups O, SO, U, and Sp. Note that the case G = Sp was not included in Theorems 1.1 and 1.2, but it will be relevant for Theorems 1.3 and 1.4. Let F be a local field of characteristic different from 2. Let K be a field which is either equal to F or a quadratic field extension of it. Let λ ↦ λ̄ be either the nontrivial automorphism of K/F or the identity automorphism if K = F. Let V be a K-linear space of dimension n. Assume that we have on V a nondegenerate sesquilinear form B which is either symmetric, Hermitian, or symplectic (in the Hermitian case K ≠ F, and in the other cases K = F). Denote by G = G(V) one of the groups O(V), SO(V), U(V), or Sp(V). Denote by g = g(V) the Lie algebra of G, which is o(V), u(V), or sp(V), i.e., the linear transformations A satisfying A* = −A with respect to the symmetric, Hermitian, or symplectic form. In the O, SO, U cases, assume that we have W ⊇ V of dimension n + 1 with an extension of ⟨⋅, ⋅⟩ to a form on W of the same type. In these cases, we also have G(W), and we may consider G as a subgroup of it.
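Concretely, the defining condition A* = −A for g(V) unwinds, via B(Au, v) = B(u, A*v), to a bilinear identity:

```latex
\mathfrak{g}(V) \;=\; \bigl\{\, A \in \operatorname{End}_K(V) \;:\;
  B(Au, v) + B(u, Av) = 0 \ \ \text{for all } u, v \in V \,\bigr\}.
```

This is the form of the condition used repeatedly below when moving powers of A from one side of the form to the other.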
Let G̃ denote the subgroup of Aut_F(V) × {±1} consisting of all (T, δ) such that ⟨Tu, Tv⟩ = ⟨u, v⟩ if δ = 1 and ⟨Tu, Tv⟩ = ⟨v, u⟩ if δ = −1. In the case that G = SO, we also require that det T = δ^{⌊(n+1)/2⌋}. This group contains G as a subgroup of index 2. Denote by χ : G̃ → {±1} the character (T, δ) ↦ δ. We have natural actions of G on G(W), G, g, and V (by conjugation on all but V, on which we let G act in the usual way). This action extends to an action of G̃ by (T, δ).A := T A^δ T^{−1} on G(W) and G, by (T, δ).A := δ T A T^{−1} on g, and by (T, δ).v := δ T v on V.
Denote by Δ the characteristic polynomial map. We shall also consider it as a map from G × V, by first projecting onto G.
We shall use the standard terminology of l-spaces introduced in [6, Section 1]. We denote by S(Z) the space of Schwartz functions on an l-space Z, and by S*(Z) the space of distributions on Z, equipped with the weak topology.

Notation 2.2 (Fourier transform) Let W be a finite-dimensional vector space over F with a nondegenerate bilinear form B on W. We denote by F_B : S*(W) → S*(W) the Fourier transform defined using B and the self-dual Haar measure on W. If W is clear from the context, we sometimes omit it from the notation and denote F = F_W.

Remark 2.3
In the Hermitian case, we define the Fourier transform using the F-bilinear form given by taking the trace of the Hermitian form.
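Unwinding Notation 2.2, for a fixed nontrivial additive character ψ of F (the choice of ψ is left implicit in the source), the Fourier transform of a Schwartz function is

```latex
(\mathcal{F}_B \varphi)(v) \;=\; \int_{W} \varphi(u)\,\psi\bigl(B(u, v)\bigr)\,du,
\qquad \varphi \in S(W),
```

where du is the self-dual Haar measure, and the transform is extended to S*(W) by duality. In the Hermitian case, one takes B(u, v) = tr_{K/F}⟨u, v⟩.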
Informally, the localization principle means that, in order to prove a certain property of distributions on Z, it is enough to prove that distributions on every fiber Z_t have this property.

Corollary 2.5
Let q : Z → T be a continuous map of l-spaces. Let an l-group H act on Z preserving the fibers of q. Let μ be a character of H. Suppose that, for any t ∈ T, S*(q^{−1}(t))^{H,μ} = 0. Then S*(Z)^{H,μ} = 0.

To formulate (iii) explicitly, let W be a finite-dimensional linear space over F with a nondegenerate bilinear form B, and suppose that H acts on W linearly, preserving B. Then, for any ξ ∈ S*(Z × W)^{H,μ}, we have F_B(Fr(ξ)) = Fr(F_B(ξ)), where Fr is taken with respect to the projection Z × W → Z.

Remark 2.8
Let Z be an l-space, and let Q ⊂ Z be a closed subset. We may identify S * (Q) with the space of all distributions on Z supported on Q. In particular, we can restrict a distribution ξ to any open subset of the support of ξ.

Definition 2.9
An element A ∈ gl(V) is said to be regular if its minimal polynomial is equal to its characteristic polynomial. In case this polynomial f of A is a power of an irreducible polynomial, we call A a minimal regular element. A regular element admits a cyclic vector e, and in the basis e, Ae, …, A^{n−1}e its matrix is the companion matrix of f; this form is called the rational canonical form of A.
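The rational canonical form referred to here is the companion-matrix form: a regular A has a cyclic vector e, and if its characteristic polynomial is f(x) = x^n + a_{n−1}x^{n−1} + ⋯ + a_0, then in the basis e, Ae, …, A^{n−1}e,

```latex
A \;=\;
\begin{pmatrix}
0 & 0 & \cdots & 0 & -a_{0} \\
1 & 0 & \cdots & 0 & -a_{1} \\
0 & 1 & \cdots & 0 & -a_{2} \\
\vdots &  & \ddots &  & \vdots \\
0 & 0 & \cdots & 1 & -a_{n-1}
\end{pmatrix}.
```

Each basis vector is sent to the next, and the last column records −a_0, …, −a_{n−1}, so the characteristic and minimal polynomials of this matrix both equal f.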

Definition 2.11
For any polynomial f, with f̄ denoting coefficient-wise conjugation, define f*(x) := f̄(−x); when f(0) ≠ 0 and deg f = n, define f^†(x) := x^n f̄(1/x). The roots of f* are the negatives of the conjugates of the roots of f, and the roots of f^† are the inverses of the conjugates of the roots of f.

Lemma 2.13 Let A ∈ g. Let (f_i)_{i∈I} be the different irreducible factors in the characteristic polynomial of A. Let (V_i)_{i∈I} be the generalized eigenspace associated with each. Take i, j ∈ I, not necessarily different. If f_i ≠ ±f_j^*, then f_i and f_j^* are coprime to each other, and so V_i and V_j are perpendicular.
Definition 2.14 (1) An operator A ∈ g will be called a simple split operator (or block) if the following conditions hold: • There is a possibly nonorthogonal decomposition V = V′ ⊕ V′*.
• V ′ and V ′ * are isotropic, and the sesquilinear form B induces the natural pairing between them.

• The action of A preserves V′ and V′*; denote by A′ and A′′ the restrictions of A to V′ and V′*, respectively.
• A ′ (and so also A ′′ ) is a minimal regular operator (see Definition 2.9).
• The irreducible factor f of the minimal polynomial of A ′ is not equal to f * .
(2) An operator A ∈ g will be called a simple nonsplit operator (or block) if: • It is a minimal regular operator.
• Its characteristic polynomial is not equal to x d with d even if g = o, and it is not equal to x d with d odd if g = sp.
(3) An operator A ∈ o will be called a simple even nilpotent operator (or block) if the following conditions hold: • Its minimal polynomial is x^d for some even d.
• V has a basis of the form e, Ae, …, A^{d−1}e, f, Af, …, A^{d−1}f.
(4) An operator A ∈ sp will be called a simple odd nilpotent operator (or block) if the following conditions hold: • Its minimal polynomial is x^d for some odd d.
• V has a basis of the form e, Ae, …, A^{d−1}e, f, Af, …, A^{d−1}f.
The following useful proposition will be proved in Appendix A.

Proposition 2.15
Each A ∈ u decomposes as an orthogonal sum of simple split blocks and simple nonsplit blocks. Each A ∈ o decomposes as an orthogonal sum of simple split blocks, simple nonsplit blocks, and simple even nilpotent blocks. Each A ∈ sp decomposes as an orthogonal sum of simple split blocks, simple nonsplit blocks, and simple odd nilpotent blocks.

Remark 2.16
The contents of Proposition 2.15 are contained in known papers and books such as [11, 15]. However, for clarity and completeness, we formulate only the propositions we need and give short proofs of them in Appendix A.
Consider also the transposition involution, which involves a choice of an isomorphism t : V → V*, and sends (A, v, ϕ) to (A^t, ϕ^t, v^t). As immediate corollaries of [9, Theorem 3.1] (and of the proof that it implies Theorem 1.1 of the same paper), we have the following theorems.

Reformulations of the problem
Let V , G,G, χ be as in Section 2. Both Theorems 1.2 and 1.4 follow from the following theorem. The proof of the theorem is by induction on dim V , proving simultaneously the following theorem.

Remark 3.3
In [14], the needed induction basis consisted of the cases n = 1 and n = 2, as the proof given there used the triviality of the center of G (up to ±1). However, we do not use this fact, and so the trivial case n = 0 suffices for us as a basis for the induction.

Harish-Chandra descent
In this section, we use the technique of Harish-Chandra descent to restrict the support of an equivariant distribution as discussed in Theorems 3.1 and 3.2. Throughout this section, assume that Theorems 3.1 and 3.2 hold in all smaller dimensions, over all finite field extensions of K.
Let (A, v) be a point in the support of a (G, χ)-equivariant distribution either on G × V (the group case) or on g × V (the Lie algebra case). Let g(X) be the characteristic polynomial of A. Consider also the characteristic polynomial map Δ (defined on G × V in the group case and on g × V in the Lie algebra case). Note that g | g^† in the group case, and g = ±g* in the Lie algebra case (recall Definition 2.11 of g^† and g*).
Theorem 4.1 Unless we are in the group case with G = SO, the polynomial g cannot be factorized into two coprime factors g_1, g_2 satisfying g_1 | g_1^† and g_2 | g_2^† (resp. g_1 = ±g_1* and g_2 = ±g_2* in the Lie algebra case). In the case G = SO, it is still true that it is impossible for g to be divisible by both x − 1 and x + 1.
Proof We give the proof for the group case and for the Lie algebra case simultaneously. By the localization principle (Corollary 2.5), it is enough to show that there is no (G, χ)-equivariant distribution ξ on any fiber of Δ lying above a polynomial not satisfying the condition we gave on g. Let F be such a fiber, lying above a polynomial g(x) = g_1(x)g_2(x) with g_1, g_2 coprime and of positive degree, satisfying g_1 | g_1^† and g_2 | g_2^† (g_1 = ±g_1* and g_2 = ±g_2* in the Lie algebra case). If we are in the group case and G = SO, we further assume that g_1(x) = (x − 1)^k for some k > 0. Let d_1, d_2 be the degrees of g_1, g_2. Given A with characteristic polynomial g(x), one may consider V_1, V_2, its generalized eigenspaces associated with g_1(x), g_2(x), respectively. By Lemma 2.13, V_1, V_2 are perpendicular to each other. Consider the set Λ of the resulting decompositions V = V_1 ⊕ V_2. There is a natural G̃-equivariant map ρ : F → Λ. Consider the stratification on Λ given by G-orbits. Note that these are the same as G̃-orbits. To show that this is indeed a stratification, we must show that there are finitely many G-orbits.
Recall that there are finitely many isomorphism classes of sesquilinear forms of the same type as B (symmetric, Hermitian, or symplectic) on a K-vector space of a given dimension. If two elements of Λ share the isomorphism classes of the restrictions of B to V_1, V_2, then these isomorphisms can be extended orthogonally to an element of G (in the case G = SO, it will only be an element of O; however, it is enough to prove that there are finitely many O-orbits). This implies that the two elements we had in Λ are in the same G-orbit (O-orbit if G = SO). It follows that there are indeed finitely many G-orbits, and so the partition into orbits is a stratification.
Let S be the union of the strata intersecting ρ(supp(ξ)), and let Ω be a stratum of the largest dimension in it (we assume by contradiction that ξ ≠ 0, i.e., S is nonempty). It is open in S, and so we may restrict ξ to ρ^{−1}(Ω). Since Ω intersects ρ(supp(ξ)), this restriction is not the zero distribution.
For the following, assume that we are not in the group case with G = SO. The action of G̃ on Ω is transitive by definition, and the stabilizer of a point in Ω, which we will call H, lies inside G̃(V_1) × G̃(V_2) as a subgroup of index 2. Using Frobenius descent (Theorem 2.7) on ξ, we get an (H, χ)-equivariant distribution on the fiber, which is a closed subspace of (G(V_1) × V_1) × (G(V_2) × V_2) (resp. of (g(V_1) × V_1) × (g(V_2) × V_2)). Hence, this distribution is G̃(V_1) × G̃(V_2)-invariant by the induction hypothesis and Corollary 2.6 to the localization principle. In particular, it is also H-invariant; thus it is 0, in contradiction to our assumption.
In the case G = SO, we have a similar situation. The action of S̃O(V) on Ω is transitive, and the stabilizer of a point in Ω, which we will call H, is a unimodular subgroup of index 4 inside Õ(V_1) × Õ(V_2). This group H contains SO(V_1) × SO(V_2) as a subgroup of index 4, on which the character χ is trivial. Since the determinant of an operator acting on V_1 with characteristic polynomial g_1(x) = (x − 1)^k and on V_2 with characteristic polynomial g_2(x) is 1, we get that g_2(0) = (−1)^{dim V_2}. If dim V_2 were odd, it would imply that g_2^† = −g_2, and in particular g_2(1) = 0. By assumption, this is not the case, and so dim V_2 must be even. It follows that χ is trivial on H ∩ (O(V_1) × O(V_2)), because any element in it acts with characteristic polynomial (x − 1)^k on V_1, and thus has determinant 1 when restricted to it; it follows that the restriction to V_2 also has determinant 1. As before, we get that any (H, χ)-equivariant distribution on the fiber is SO(V_1) × SO(V_2)-invariant (using the localization principle and the induction hypothesis). In particular, it is γ-invariant for an element γ ∈ H with χ(γ) = −1, and thus it is 0. Using Frobenius descent, we get that this implies ξ = 0, giving a contradiction. ∎

We give the following theorem only in the Lie algebra case, as this is what will be used. However, it also holds in the group case, with the same proof.

Proof Again, we use the localization principle. Let F be the fiber above a polynomial of the form g(x) = g_1(x)g_2(x) with g_2 = ±g_1*, the two being coprime to each other. (By Theorem 4.1, it is enough to consider this case.) Given A with characteristic polynomial g(x), one may consider V_1, V_2, its generalized eigenspaces associated with g_1(x), g_2(x), respectively. By Lemma 2.13, V_1, V_2 are both isotropic. Consider the set Λ of the resulting decompositions V = V_1 ⊕ V_2; there is a natural G̃-equivariant map ρ : F → Λ. To see that G acts transitively on Λ, take two points (V_1, V_2) and (V_1′, V_2′) of Λ, and bases E_1 of V_1 and E_1′ of V_1′. We may take E_2 to be the basis of V_2 dual to E_1 with respect to the pairing between V_1, V_2 induced by B. Similarly, we may take E_2′.
The linear transformation which sends E_1 to E_1′ and E_2 to E_2′ preserves B, and thus it is an element of G. So the actions of both G and G̃ on Λ are transitive, and the stabilizer inside G̃ of a point in Λ is isomorphic to G̃L(V_1), which is a unimodular group. Using Frobenius descent (Theorem 2.7) on ξ, we get a (G̃L(V_1), χ)-equivariant distribution on the fiber, which is isomorphic to gl(V_1) × V_1 × V_1*. By Theorem 2.19, this distribution must be equal to 0, and so is the original one. ∎ We formulate the next theorem only for the Lie algebra case and for g = sp, although again it is also true in all the other cases.

Theorem 4.3
Consider the Lie algebra case of g = sp. In this case, the irreducible factor of g is either linear or inseparable.
Proof Again, we use the localization principle. Let F be the fiber above a polynomial g(x) = f(x)^s with f irreducible, separable, of degree d > 1, and satisfying f* = ±f. Given A with characteristic polynomial g(x), we may consider its additive Jordan decomposition into semisimple and nilpotent parts, A_s and A_n (this uses the separability of f). Let F_s be the space of possible A_s's, that is, the space of semisimple elements of g with characteristic polynomial g(x). We have a G̃-equivariant map θ : F → F_s. By [11], F_s is a disjoint union of finitely many G-orbits, all of the same dimension. Write m := K[T]/( f (T)); it is a field, and A_s makes V a vector space over m. The condition f* = ±f provides an involution σ of m extending the conjugation of K/F, together with an m-sesquilinear form S whose trace recovers B. Fix a ∈ m such that σ(a) = −a (e.g., a = T ∈ F[T]/ f (T)). Then it follows from the above that aS(⋅, ⋅) is a nondegenerate Hermitian form on V_m (with respect to the involution σ), where V_m is V as a linear space over m. To say that a linear automorphism of V commutes with A is to say that it is m-linear, and for such an automorphism, to say that it is in G(V) is to say that it preserves aS. Thus, the centralizer of A in G(V) can be described as U(V_m). Moreover, the stabilizer of A in G̃(V) can be described as Ũ(V_m), and the centralizer of A inside g(V) can be described as u(V_m).
Recall that we need to show that any (G, χ)-equivariant distribution on G × V is 0. We pass from the group to the Lie algebra using the Cayley transforms C_{±1}, defined on the sets G^{(±1)}(V) below. This definition makes sense since, for A ∈ G^{(1)}(V), the operator inverted in the formula for C_1 is indeed invertible, and similarly for C_{−1}.

Definition 5.4
Let g_0 be the subset of g consisting of elements not having ±1 as eigenvalues.

Proposition 5.5
The maps C_{±1} are G̃-homeomorphisms from G^{(±1)}(V) (respectively) to g_0, unless we are in the case G = SO, considering C_1, and dim V is even.
Proof First, exclude the case G = SO. We will give the proof for C_1; the case of C_{−1} is identical. The function in question is locally constant and compactly supported inside g_0. Thus, we can extend g ⋅ ξ to a (G, χ)-equivariant distribution on g × V with (A_0, v_0) in its support. In particular, this distribution is not 0, which creates a contradiction to our assumption. ∎
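With the normalization C_1(g) := (g − 1)(g + 1)^{−1} (an assumption on the convention; the source does not display the formula for C_{±1}), both halves of the statement become direct computations. Using g* = g^{−1} and the fact that rational expressions in g commute:

```latex
C_1(g)^{*} \;=\; (g^{*}+1)^{-1}(g^{*}-1)
           \;=\; (g^{-1}+1)^{-1}(g^{-1}-1)
           \;=\; g\,(1+g)^{-1}(1-g)\,g^{-1}
           \;=\; -(g-1)(g+1)^{-1} \;=\; -C_1(g),
```

so C_1(g) lies in g. On eigenvalues, C_1 acts by λ ↦ (λ − 1)/(λ + 1), which never takes the value 1 and takes the value −1 only at λ = 0, impossible for an invertible g; hence the image lies in g_0.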

An important lemma and automorphisms
Proof We do not consider in the following the case g = sp, as in this case Γ = V and there is nothing to prove. This proof is the same as the proof of [3, Proposition 5.2]. The idea is to consider the map g × V → K given by (A, v) ↦ ⟨v, v⟩, and apply the localization principle (Corollary 2.5) to it to restrict to a fiber. Then apply Frobenius descent (Theorem 2.7) to the projection on the second coordinate, to reach a point where it is enough to show that any (G(V′), χ)-equivariant distribution on g(V′) is 0, for some subspace V′ ⊆ V of codimension 1. We have a decomposition g = g(V′) ⊕ V′ ⊕ E, with E being either a zero- or one-dimensional vector space over F with trivial G(V′)-action, and so we can use the induction hypothesis to finish. ∎ Denote by ϕ_v the linear transformation u ↦ ⟨u, v⟩v.
The following definition will be relevant for the cases of u and sp.

Definition 6.2 For any λ, define an automorphism ν_λ of g × V. This is an automorphism of g × V as a space with a G̃-action.
The following definition will be relevant only for the case of o.

Definition 6.3
For any λ ∈ F, define an automorphism μ_λ of g × V. This is an automorphism of g × V as a space with a G̃-action.
Let F be a fiber of Δ at a polynomial f. Recall that we must have f*(x) = (−1)^n f(x). Choose a polynomial g ∈ K[x] coprime to f that also satisfies g*(x) = g(x) mod f(x). Then we can make the following definition.

Definition 6.4 Define an automorphism of F by ρ_g(A, v) := (A, g(A)v).
To show that it is invertible, notice that there is an "inverse" polynomial g^{−1} such that g g^{−1} = 1 mod f. It also satisfies (g^{−1})*(x) = g^{−1}(x) mod f(x): indeed, for some polynomial a, we have g(x)g^{−1}(x) = 1 + a(x) f(x), and applying ∗ gives g*(x)(g^{−1})*(x) = 1 ± a*(x) f(x), the left-hand side being equal to (g^{−1})*(x)g(x) modulo f(x). This implies that we have ρ_{g^{−1}}, which is inverse to ρ_g. To show that ρ_g commutes with the action of G̃, the only nontrivial part is to show that it commutes with the action of an element x ∈ G̃ ∖ G. Consider x as an element of End_F(V) satisfying ⟨xu, xw⟩ = ⟨w, u⟩ for any u, w ∈ V. In particular, x satisfies a xu = x ā u for any a ∈ K, u ∈ V. To show commutation, we need to show that −x g(A)v = g(−xAx^{−1})(−xv). This is true, as g(−xAx^{−1})(−xv) = −g(−xAx^{−1})xv = −x g*(A)v = −x g(A)v.

For the last equation, we used the condition imposed on g, and the fact that f (A) = 0.
Thus, we get that ρ_g is a G̃-automorphism of F. In the case g = sp, we give the following lemma, using the automorphisms ν_λ to amplify the restriction of Theorem 4.3.

Lemma 6.5 Assume that
Proof Given A ∈ End(V ), write the characteristic polynomial of A as Denote also

Proposition 6.7 Any (G, χ)-equivariant distribution on g × V is supported on R.
Proof By the localization principle (Corollary 2.5), it is enough to show that any (G, χ)-equivariant distribution on a fiber F of Δ at a polynomial f is supported on R ∩ F (note that R is G̃-invariant). Let ξ be such a distribution, and let (A, v) be a point in supp(ξ). Let us start with the case of u.
Choose ω ∈ K^× with ω̄ = −ω. Let g ∈ F[x], and consider g_1(x) = g(x^2) and g_2(x) = ωx g(x^2). They satisfy g_1(−x) = g_1(x) and g_2(−x) = −g_2(x). Choose g such that g_1 is coprime to f. We can apply ρ_{g_1} to ξ and extend back to g × V to get, by Lemma 6.1, that ⟨g_1(A)v, g_1(A)v⟩ = 0. We know this for a Zariski dense subset of polynomials g ∈ F[x], and so for all g ∈ F[x]. The same goes for g_2. So, in particular, ⟨A^{2k}v, v⟩ = 0 and ⟨ωA^{2k+1}v, v⟩ = 0 for all k ≥ 0. Note that indeed it follows from A* = −A that ⟨A^{2k}v, v⟩ = ⟨v, A^{2k}v⟩ and that ⟨ωA^{2k+1}v, v⟩ = ⟨v, ωA^{2k+1}v⟩.

Now, for the case of o, we still have g_1, and the same proof as before shows that ⟨A^{2k}v, v⟩ = 0. However, in this case it is always true that ⟨A^{2k+1}v, v⟩ = 0, by the symmetry of the form and the skewness of A. For the case of sp, we use the same technique, applied to the condition coming from Lemma 6.5. This way, we get for a Zariski dense subset of F[x] (and thus for all g ∈ F[x]) that ⟨Ag(A^2)v, g(A^2)v⟩ = 0. From this, we are able to get ⟨A^{2k+1}v, v⟩ = 0 for all k ≥ 0. Moreover, in this case it is always true that ⟨A^{2k}v, v⟩ = 0, by the antisymmetry of the form.

Proof For Δ(ν_λ(A, v)), this follows directly from Proposition 2.17. For Δ(μ_λ(A, v)), this also follows from Proposition 2.17, but with an iterative use.
Since ⟨A^k Av, v⟩ = 0 for all k ≥ 0, we get Δ(A) = Δ(A + Aϕ_v).
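The pairing manipulations in this section all reduce to the following identity, obtained from A* = −A by moving powers of A across the form one at a time:

```latex
\langle A^{i}v,\; A^{j}v\rangle \;=\; (-1)^{j}\,\langle A^{i+j}v,\; v\rangle .
```

In particular, in the orthogonal case ⟨A^k v, v⟩ = ⟨v, A^k v⟩ = (−1)^k ⟨A^k v, v⟩, so the pairings with odd k vanish automatically, while in the symplectic case the pairings with even k vanish for the same reason.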

Stratification
For any g ∈ K[x] which is a power of an irreducible polynomial, let Y_g be the subset of g consisting of elements with characteristic polynomial g. By the localization principle (Corollary 2.5), the previous reformulations, and Theorem 4.2, it is enough to prove that any (G, χ)-equivariant distribution on Δ^{−1}(g) = Y_g × V is 0, for any g as above. Let us fix g and prove this claim for it. We proceed similarly to [3, 9]. The strategy will be to stratify Y_g and restrict, stratum by stratum, the possible support of a (G, χ)-equivariant distribution (note that Y_g is a union of finitely many G̃-orbits). For the unitary case, choose ω ∈ K such that ω̄ = −ω (in the symplectic case, denote ω = 1). For λ ∈ F, denote by η_λ either ν_{λω} or μ_λ, depending on which case we are in.

Notation 7.1 Denote by P_i(g) the union of all G̃-orbits of Y_g of dimension at most i, and let R_i(g) := R ∩ (P_i(g) × V). Moreover, for any open G̃-orbit O of P_i(g), set Õ := R ∩ (O × V). Note that the P_i(g) are Zariski closed inside Y_g, P_k(g) = Y_g for k big enough, and P_{−1}(g) = ∅.
We denote by F_V the Fourier transform on V with respect to the nondegenerate F-bilinear form (u, v) ↦ tr_{K/F}(⟨u, v⟩). It will also be used to denote the partial Fourier transform along V when applied to X × V for some space X. In the cases of g = o and g = u, F_V commutes with the action of G̃. In the case of g = sp, this is not true; instead, the action after applying F_V is compatible with the action of G̃ on V by (g, δ).v := gv (recall that the usual action of G̃ on V is by (g, δ).v := δgv). Since −1 ∈ Sp, we still have that the Fourier transform maps S*(X × V)^{H,τ} into itself for any X ⊆ sp, any subgroup H of S̃p containing −1, and any τ ∈ {1, χ}.

Then ξ = 0. This claim will be proved in the next section. Let us now show how it implies the main theorems. Recall that Theorem 3.2, which states that any (G, χ)-equivariant distribution on g × V is 0, implies Theorems 1.2 and 1.4. This is by virtue of Theorem 5.7 and what is shown in Section 3.

Proof of Theorem 3.2
We prove the following claim by downward induction: any (G, χ)-equivariant distribution on Δ^{−1}(g) is supported inside R_i(g). This claim for i big enough follows from Proposition 6.7, and the claim for i = −1 implies Theorem 3.2 by the localization principle (Corollary 2.5) and Theorem 4.2, as already explained at the top of this section. For the induction step, take such a distribution ξ. As P_i(g) ∖ P_{i−1}(g) is a disjoint union of open orbits, it is enough to show that the restriction of ξ to any O × V, where O is an open orbit of P_i(g), is zero. Let ζ = ξ|_{O×V} be such a restriction. By the induction hypothesis applied to η_λ(ξ), we know that supp(ζ) ⊆ Õ, and similarly supp(F_V(ζ)) ⊆ Õ. Hence, by Claim 7.2, ζ = 0. ∎

Handling a single stratum: proof of Claim 7.2

Nice operators
This subsection closely follows [3, Section 6] and [9, Section 4.3], but we give it here for completeness.

Notation 8.1
For A ∈ gl(V), set Q_A and R_A as follows, in the cases g = u, sp and in the case g = o, respectively. Here, [B, C] := BC − CB is the Lie bracket (with the bracket map [A, −] used in the orthogonal case).

Thus, the G̃-orbit of A is equal to its G-orbit. It is known that C_A is unimodular, and hence C̃_A is also unimodular. Claim 7.2 follows now from Frobenius descent (Theorem 2.7), Proposition 8.2, and the following proposition.

Proposition 8.6 Let A ∈ g. Let η ∈ S*(V)^{C_A}. Suppose that both η and F_V(η) are supported in Q_A. Then η ∈ S*(V)^{C̃_A}.

Definition 8.7
Call an element A ∈ g "nice" if the previous proposition holds for A. Namely, A is "nice" if any distribution η ∈ S*(V)^{C_A} such that both η and F(η) are supported in Q_A is also C̃_A-invariant.

A "simple" operator is nice
Using the classification of Proposition 2.15, we need to check that simple nonsplit, simple even nilpotent, and simple odd nilpotent blocks are nice (recall that we assumed the characteristic polynomial of our original operator to be a power of an irreducible polynomial, and thus we need not check simple split operators). Let A be a block of one of these types. Let s = (T, −1) be an element of C̃_A with χ(s) = −1. We have A = s.A = −TAT^{−1}, and so TA = −AT. We need to prove the following claim for each of the possible block types.

Claim 8.9
Let ξ be a C_A-invariant distribution on V such that both ξ and F(ξ) are supported on Q_A. Then ξ is also s-invariant.
We shall prove this claim in the following subsections. This claim implies Claim 7.2.

Simple nonsplit blocks
Assume that A is a simple nonsplit block with minimal polynomial f^d, where f is irreducible and f* = ±f. In the case g = o, assume also that f(x) ≠ x. We know by Proposition 8.3 that Q_A ⊆ R_A. Consider the self-dual increasing filtration V_i = ker f(A)^i. One easily sees that R_A = V_{⌊d/2⌋}. The fact that F(ξ) is supported on V_{⌊d/2⌋} means that ξ is invariant to shifts by (V_{⌊d/2⌋})^⊥ = V_{⌈d/2⌉}. Now, consider two cases: (1) d is odd. Then V_{⌊d/2⌋} ⊊ V_{⌈d/2⌉}. Choosing a vector v ∈ V_{⌈d/2⌉} ∖ V_{⌊d/2⌋}, we get that ξ coincides with its shift by v, while being supported on V_{⌊d/2⌋} ∩ (v + V_{⌊d/2⌋}) = ∅, and thus ξ = 0.
(2) d is even. Then ξ is supported on V_{d/2} and invariant to shifts by (V_{d/2})^⊥ = V_{d/2}, and hence is a multiple ζ of the Lebesgue measure on V_{d/2}. So s multiplies ζ by a constant c, which is positive, because s preserves the positivity of the Lebesgue measure. Since s² ∈ C_A, we have by assumption s²ζ = ζ. So unless ζ = 0, c² = 1; hence, by positivity, c = 1, and we are done.

Simple nonsplit nilpotent blocks in the orthogonal case
Note that a simple nonsplit nilpotent block is such that V has a basis of the form e, Ae, . . . , A^{d−1}e, the minimal polynomial of A is equal to x^d, and the form is determined on this basis by some nonzero constant c ∈ F. Note that this implies that d must be odd. Let A be such a block, and denote by V_1, V_2, V_3 the subspaces appearing in Theorem 8.3. Since it is enough to prove our claim for any valid choice of s = (T, −1) ∈ C̃_A, we may simply take T(A^i e) = (−1)^{(d+1)/2+i} A^i e. Then s acts on V by A^i e ↦ (−1)^{(d−1)/2+i} A^i e. The spaces V_1, V_2, V_3 are s-invariant, and s acts on V_2 by the identity. It is also clear that dv_3 and δ_1 are s-invariant. Thus, ξ is s-invariant, and we are done.
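As a quick numeric sanity check (an illustration, not part of the proof), one can verify that this T anticommutes with A and lies in O(V). Here d = 5, and the form is modeled by pairing A^i e with A^{d−1−i} e with ⟨A^{d−1}e, e⟩ = c; these choices are assumptions made only for the illustration.

```python
import numpy as np

d, c = 5, 2.0  # an odd block size and a nonzero constant (illustrative choices)

# A acts on the basis e, Ae, ..., A^(d-1)e by the shift A^i e ↦ A^(i+1) e.
A = np.diag(np.ones(d - 1), k=-1)

# Model of the invariant form: ⟨A^i e, A^j e⟩ = (-1)^j c when i + j = d - 1,
# and 0 otherwise (an assumption made only for this illustration).
G = np.zeros((d, d))
for i in range(d):
    G[i, d - 1 - i] = (-1.0) ** (d - 1 - i) * c

# Sanity: A lies in o(V), i.e., ⟨Au, v⟩ + ⟨u, Av⟩ = 0, i.e., AᵀG + GA = 0.
assert np.array_equal(A.T @ G + G @ A, np.zeros((d, d)))

# T(A^i e) = (-1)^((d+1)/2 + i) A^i e, as chosen in the text.
T = np.diag([(-1.0) ** ((d + 1) // 2 + i) for i in range(d)])

# T anticommutes with A (note T⁻¹ = T, since T is diagonal with entries ±1) ...
assert np.array_equal(T @ A @ T, -A)
# ... and T preserves the form, so T ∈ O(V).
assert np.array_equal(T.T @ G @ T, G)
```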

Simple even nilpotent blocks
The last equation follows from the fact that A preserves E and F. From this, a computation of the relevant pairings for all k ≥ 0 shows the following: if ξ is a distribution as in the statement of Claim 8.9, it must be supported on E_2 ⊕ F_2 and invariant to translations by (E_2 ⊕ F_2)^⊥ = E_2 ⊕ F_2. Thus, ξ is a multiple of the Lebesgue measure on E_2 ⊕ F_2. As it is enough to prove the claim for any specific choice of s = (T, −1) ∈ C̃_A, we can choose T so that s acts by A^i e ↦ (−1)^{i+1} A^i e and A^i f ↦ (−1)^i A^i f, and so fixes ξ, as desired.
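The choice of s can likewise be checked numerically (an illustration only; d = 4 and the basis e, Ae, ..., A^{d−1}e, f, Af, ..., A^{d−1}f are assumed as a model of the block):

```python
import numpy as np

d = 4  # an even block size (illustrative)

# A shifts each of the two chains: A^i e ↦ A^(i+1) e and A^i f ↦ A^(i+1) f.
shift = np.diag(np.ones(d - 1), k=-1)
A = np.block([[shift, np.zeros((d, d))], [np.zeros((d, d)), shift]])

# s acts by A^i e ↦ (-1)^(i+1) A^i e and A^i f ↦ (-1)^i A^i f.
S = np.diag([(-1.0) ** (i + 1) for i in range(d)] + [(-1.0) ** i for i in range(d)])

# s conjugates A to -A (S⁻¹ = S, since S is diagonal with entries ±1).
assert np.array_equal(S @ A @ S, -A)
```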

Simple odd nilpotent blocks
So U_m splits off as an A-invariant orthogonal direct summand of V, and the restriction of A to U_m is a homogeneous block. It is an A-invariant space on which the restriction of the form B is nondegenerate, and it is generated by a single vector (v). By induction, we are done. In the symplectic case (outside the excluded case), the proof is similar. Again, define a bilinear form on U := V / f(A)V. If the minimal polynomial is x^d for d even, define as before ⟨u, v⟩_U := ⟨A^{d−1}u, v⟩; this is a symmetric form. Otherwise, A is invertible, and we set ⟨u, v⟩_U := ⟨Af(A)^{d−1}u, v⟩; again, this bilinear form is symmetric. The rest of the proof goes through in the same way, noting that ⟨Af(A)^{d−1}v, v⟩ ≠ 0 implies that V_0 ∩ V_0^⊥ = 0 for V_0 = Span(v, Av, A²v, . . . ). In the orthogonal case, if the minimal polynomial is x^d for d even (resp. in the symplectic case, for d odd), the bilinear form ⟨u, v⟩_U := ⟨A^{d−1}u, v⟩ on U is skew-symmetric. Take u_1, u_2 ∈ U such that ⟨u_1, u_2⟩_U = 1, lift them to v_1, v_2 ∈ V, and take V_0 = Span(v_1, Av_1, A²v_1, . . . ) ⊕ Span(v_2, Av_2, A²v_2, . . . ) (note that this sum is indeed direct). V_0 is A-invariant, and V_0^⊥ ∩ V_0 = 0. Now, we are left to show that V_0 is a simple even nilpotent block (resp. a simple odd nilpotent block). The first step is to show that we can alter the lifts v_1, v_2 of u_1, u_2 (from V_0/AV_0 to V_0) so that ⟨A^j v_1, v_1⟩ = 0 for all j. (We show this for v_1; the argument for v_2 is exactly the same.) For odd (resp. even) j, this holds automatically. In what follows, we assume that v_2 is any lift of u_2 (the important property is that ⟨A^{d−1}v_1, v_2⟩ = 1). If m is the minimal integer such that ⟨A^{d−m}v_1, v_1⟩ ≠ 0, we can add a multiple of A^{m−1}v_2 to v_1 to correct this without affecting the pairings ⟨A^{d−i}v_1, v_1⟩ for i < m. Notice that m must be even and m ≥ 2.
So, by applying this consecutively, we may change the lift v_1 (and similarly v_2) in the desired way (notice that we have indeed changed it only by vectors in AV_0). Now, all that is left is to change v_1, v_2 once more so that ⟨A^k v_1, v_2⟩ = 0 for all k < d − 1. For this, simply choose a vector ṽ_2 which is orthogonal to v_1, Av_1, A²v_1, . . . , A^{d−2}v_1, v_2, Av_2, . . . , A^{d−1}v_2 and such that ⟨A^{d−1}v_1, ṽ_2⟩ = 1. Note that the vector v_2 − ṽ_2 is perpendicular to the subspace V_2 := Span(v_2, Av_2, . . . , A^{d−1}v_2), and so v_2 − ṽ_2 ∈ V_2^⊥ = V_2. This implies that ⟨A^k ṽ_2, ṽ_2⟩ = 0 for all k ≥ 0. Moreover, v_2 − ṽ_2 is perpendicular to A^{d−1}V_0, and so v_2 − ṽ_2 ∈ AV_2. Thus, we can replace v_2 with ṽ_2, and all of the needed conditions are satisfied. ∎

The above immediately implies Proposition 2.15.

B On the centralizers in SO
In this appendix, we prove Theorem 8.5 in the case G = SO(V). We need to show that there exists an element T ∈ O(V) such that TAT^{−1} = −A and det T = (−1)^{⌊(n+1)/2⌋}. Assume the decomposition of Proposition 2.15. It is enough to prove Theorem 8.5 for each of the blocks, as taking the direct sum of the elements T_i found for the blocks gives an element T ∈ O(V) with TAT^{−1} = −A. If all of the block dimensions n_i are even, then det(⊕T_i) = ∏ det T_i = (−1)^{∑ n_i/2} = (−1)^{n/2}. Otherwise, there is an odd-dimensional block, and by replacing its T_i by −T_i if needed, we can make the sign of the determinant whatever we wish. Now, we need to check each of the simple block types.
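The determinant bookkeeping above is elementary and can be illustrated numerically (the matrices below are arbitrary stand-ins for the T_i; only their sizes and determinants matter):

```python
import numpy as np

def block_diag(*blocks):
    """Assemble a block-diagonal matrix from square blocks."""
    n = sum(b.shape[0] for b in blocks)
    out = np.zeros((n, n))
    i = 0
    for b in blocks:
        k = b.shape[0]
        out[i:i + k, i:i + k] = b
        i += k
    return out

# Stand-ins for T_i on an even-dimensional and an odd-dimensional block.
T2 = np.array([[0.0, 1.0], [1.0, 0.0]])                 # size 2, det = -1
T3 = np.array([[1.0, 0, 0], [0, 0, 1.0], [0, 1.0, 0]])  # size 3, det = -1

# The determinant of a direct sum is the product of the determinants.
T = block_diag(T2, T3)
assert np.isclose(np.linalg.det(T), np.linalg.det(T2) * np.linalg.det(T3))

# Replacing T_i by -T_i multiplies det by (-1)^(n_i): negating the
# odd-dimensional block flips the overall sign; the even one does not.
assert np.isclose(np.linalg.det(-T3), -np.linalg.det(T3))
assert np.isclose(np.linalg.det(-T2), np.linalg.det(T2))
```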

B.1 Simple split blocks
We have V = V′ ⊕ V′*, with the natural symmetric bilinear form coming from the pairing, and A acts as A′ on V′ and as −A′* on V′*. By the well-known theorem that any square matrix (over any field and of any size) is conjugate to its transpose, there is an isomorphism conjugating A′ to its transpose; using it to exchange V′ and V′*, we obtain an element T with TAT^{−1} = −A. Furthermore, det T = (−1)^{dim V′} = (−1)^{dim V/2}.
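The well-known fact used here can be illustrated in its simplest special case: the reversal (anti-diagonal) matrix conjugates a single Jordan block to its transpose. This is a numeric check, not a proof; over a general field, one reduces instead to a canonical form built from such blocks.

```python
import numpy as np

n, lam = 4, 3.0
# A single Jordan block with eigenvalue lam.
J = lam * np.eye(n) + np.diag(np.ones(n - 1), k=1)

# The reversal (exchange) matrix, with 1's on the anti-diagonal; P⁻¹ = P.
P = np.fliplr(np.eye(n))

# P conjugates J to its transpose.
assert np.array_equal(P @ J @ P, J.T)
```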

B.2 Simple nonsplit blocks
Clearly, TAT^{−1} = −A. Moreover, det T = (−1)^{⌊n/2⌋}, which is what we wanted in the nonnilpotent case (where n is even); in the nilpotent case (where n is odd), we may replace T by −T if needed, in order to achieve the desired sign of det T.