Linear PDE with Constant Coefficients

We discuss practical methods for computing the space of solutions to an arbitrary homogeneous linear system of partial differential equations with constant coefficients. These rest on the Fundamental Principle of Ehrenpreis-Palamodov from the 1960s. We develop this further using recent advances in computational commutative algebra.


Introduction
Our calculus class taught us how to solve ordinary differential equations (ODE) of the form

c_0 φ + c_1 φ′ + c_2 φ′′ + · · · + c_m φ^(m) = 0.   (1)

Here we seek functions φ = φ(z) in one unknown z. The ODE is linear of order m, it has constant coefficients c_i ∈ C, and it is homogeneous, meaning that the right hand side is zero. The set of all solutions is a vector space of dimension m. A basis consists of m functions

φ(z) = z^a · exp(u_i z).   (2)

Here u_i is a complex zero of multiplicity larger than a ∈ N of the characteristic polynomial

p(x) = c_0 + c_1 x + c_2 x² + · · · + c_m x^m.   (3)
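To make this recipe concrete, here is a small sympy check (our own sample, not from the text): for the ODE φ′′ − 4φ′ + 4φ = 0, the characteristic polynomial has a double root, and the resulting basis functions z^a · exp(u z) are verified by substitution.

```python
import sympy as sp

z, x = sp.symbols('z x')

# Sample ODE phi'' - 4*phi' + 4*phi = 0 with characteristic polynomial
# p(x) = x^2 - 4x + 4 = (x - 2)^2, which has the double root u = 2.
p = x**2 - 4*x + 4
roots = sp.roots(p)          # dictionary {root: multiplicity}

# The solution basis consists of z^a * exp(u*z) for a = 0, ..., mult - 1.
basis = [z**a * sp.exp(u*z) for u, m in roots.items() for a in range(m)]

# Check each basis function against the ODE.
for f in basis:
    residual = sp.simplify(f.diff(z, 2) - 4*f.diff(z) + 4*f)
    assert residual == 0
```

The double root contributes both exp(2z) and z·exp(2z), illustrating how multiplicities enter the basis (2).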
Thus solving the ODE (1) means finding all the zeros of (3) and their multiplicities. We next turn to a partial differential equation (PDE) for functions φ : R² → R that is familiar from the undergraduate curriculum, namely the one-dimensional wave equation

∂²φ/∂t² = c² · ∂²φ/∂z²,   (4)

where c ∈ R\{0}.
D'Alembert found in 1747 that the general solution is the superposition of traveling waves,

φ(z, t) = f(z + ct) + g(z − ct),   (5)

where f and g are twice differentiable functions in one variable. For the special parameter value c = 0, the PDE (4) becomes φ_tt = 0, and the general solution still has two summands:

φ(z, t) = f(z) + t · h′(z).   (6)
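D'Alembert's traveling-wave solution can be verified symbolically. The following sympy sketch checks that f(z + ct) + g(z − ct) satisfies the wave equation for arbitrary twice differentiable f and g.

```python
import sympy as sp

z, t, c = sp.symbols('z t c')
f = sp.Function('f')
g = sp.Function('g')

# d'Alembert's solution (5): superposition of two traveling waves.
phi = f(z + c*t) + g(z - c*t)

# The wave equation phi_tt = c^2 * phi_zz, written as a residual.
residual = sp.simplify(phi.diff(t, 2) - c**2 * phi.diff(z, 2))
assert residual == 0
```

The chain rule produces the factor c² on both sides, so the residual cancels exactly.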
We get this from (5) by replacing g(z − ct) with (1/(2c)) · (h(z+ct) − h(z−ct)) and taking the limit c → 0. Here, the role of the characteristic polynomial (3) is played by the quadratic form

q_c = x_2² − c² x_1².   (7)

The solutions (5) and (6) mirror the algebraic geometry of the conic {q_c = 0} for any c ∈ R.
Here, a is any function in three variables, b, c, d, e, f, g, h are functions in two variables, and ψ = (α, β) is any function R⁴ → C² that satisfies the four linear first-order PDE in (10). We note that all solutions to (10) also satisfy (8), and they admit the integral representation

α̃ = ∫ t · exp(s²x + sty + t²(z+w)) dµ(s, t),   β̃ = − ∫ s · exp(s²x + sty + t²(z+w)) dµ(s, t),   (11)

where µ is a measure on C². All functions in (9) are assumed to be suitably differentiable.
Our aim is to present methods for solving arbitrary systems of homogeneous linear PDE with constant coefficients. The input is a system like (1), (4), (8) or (10). We seek to compute the corresponding output (2), (5), (9) or (11) respectively. We present techniques that are based on the Fundamental Principle of Ehrenpreis and Palamodov, as discussed in the classical books [7,17,23,32]. We utilize the theory of differential primary decomposition [12]. While deriving (5) from (4) is easy by hand, getting from (8) to (9) requires a computer.
This article is primarily expository. One goal is to explain the findings in [8,9,10,11,12], such as the differential primary decompositions of minimal size, from the viewpoint of analysis and PDE. In addition to these recent advances, our development rests on a considerable body of earlier work. The articles [15,29,31] are especially important. However, there are also some new contributions in the present article, mostly in Sections 4, 5 and 6. We describe the first universally applicable algorithm for computing Noetherian operators.
This presentation is organized as follows. Section 2 explains how linear PDE are represented by polynomial modules. The Fundamental Principle (Theorem 2.2) is illustrated with concrete examples. In Section 3 we examine the support of a module, and how it governs exponential solutions (Proposition 3.7) and polynomial solutions (Proposition 3.9). Theorem 3.8 characterizes PDE whose solution space is finite-dimensional. Section 4 features the theory of differential primary decomposition [9,12]. Theorem 4.4 shows how this theory yields the integral representations promised by Ehrenpreis-Palamodov. This result appeared implicitly in the analysis literature, but the present algebraic form is new. It is the foundation of our algorithm for computing a minimal set of Noetherian multipliers. This is presented in Section 5, along with its implementation in the command solvePDE in Macaulay2 [21].
The concepts of schemes and coherent sheaves are central to modern algebraic geometry. In Section 6 we argue that linear PDE are an excellent tool for understanding these concepts, and for computing their behaviors in families. Hilbert schemes and Quot schemes make an appearance along the lines of [9,11]. Section 7 is devoted to directions for further study and research in the subject area of this paper. It also features more examples and applications.

PDE and Polynomials
Our point of departure is the observation that homogeneous linear partial differential equations with constant coefficients are the same as vectors of polynomials. The entries of the vectors are elements in the polynomial ring R = K[∂ 1 , ∂ 2 , . . . , ∂ n ], where K is a subfield of the complex numbers C. In all our examples we use the field K = Q of rational numbers. This has the virtue of being amenable to exact symbolic computation, e.g. in Macaulay2 [21].
For instance, in (1), we have n = 1. Writing ∂ = d/dz for the generator of R, our ODE is given by one polynomial p(∂) = c_0 + c_1 ∂ + · · · + c_m ∂^m, where c_0, c_1, . . . , c_m ∈ K. For n ≥ 2, we write z = (z_1, . . . , z_n) for the unknowns in the functions we seek, and the partial derivatives that act on these functions are ∂_i = ∂_{z_i} = ∂/∂z_i. With this notation, the wave equation in (4) corresponds to the polynomial q_c(∂) = ∂_2² − c²∂_1² = (∂_2 − c∂_1)(∂_2 + c∂_1) with n = 2. Finally, the PDE in (8) has n = 4 and is encoded in three polynomial vectors, displayed in (12). The system (8) corresponds to the submodule of R² that is generated by these three vectors.
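The factorization of the wave operator stated above can be checked symbolically. In this sketch, d1 and d2 are commuting stand-in symbols for the operators ∂_1 and ∂_2; since R is a commutative polynomial ring, this loses nothing.

```python
import sympy as sp

d1, d2, c = sp.symbols('d1 d2 c')

# The symbol of the wave operator: q_c = d2^2 - c^2 * d1^2.
q = d2**2 - c**2 * d1**2

# Verify the factorization q_c = (d2 - c*d1) * (d2 + c*d1).
q_factored = sp.factor(q)
assert sp.expand(q_factored - q) == 0
assert sp.expand((d2 - c*d1)*(d2 + c*d1) - q) == 0
```

Each linear factor annihilates one of the two traveling waves in (5), which is the algebraic reason for the two-summand structure of the general solution.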
We shall study PDE that describe vector-valued functions from n-space to k-space. To this end, we need to specify a space F of sufficiently differentiable functions such that F^k contains our solutions. The scalar-valued functions in F are either real-valued functions ψ : Ω → R or complex-valued functions ψ : Ω → C, where Ω is a suitable subset of R^n or C^n. Later we will be more specific about the choice of F. One requirement is that the space F^k should contain the exponential functions

q(z) · exp(u^t z) = q(z_1, . . . , z_n) · exp(u_1 z_1 + · · · + u_n z_n).   (13)
Here u ∈ C n and q is any vector of length k whose entries are polynomials in n unknowns.
Remark 2.1 (k = 1). A differential operator p(∂) in R annihilates the function exp(u t z) if and only if p(u) = 0. This is the content of [27,Lemma 3.25]. See also Lemma 3.6. If p(∂) annihilates a function q(z) · exp(u t z), where q is a polynomial of positive degree, then u is a point of higher multiplicity on the hypersurface {p = 0}. In the case n = 1, when p is the characteristic polynomial (3), we have a solution basis of exponential functions (2).
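The first statement in Remark 2.1 is easy to test by hand or by machine: applying p(∂) to exp(u^t z) just multiplies it by p(u). The operator ∂_1² − ∂_2 below is our own sample choice, with the parabola {u_1² = u_2} as its hypersurface.

```python
import sympy as sp

z1, z2, u1, u2 = sp.symbols('z1 z2 u1 u2')
expfun = sp.exp(u1*z1 + u2*z2)

# Apply the sample operator p(d) = d1^2 - d2 to exp(u^t z):
# the result is p(u) * exp(u^t z), which vanishes exactly when p(u) = 0.
applied = expfun.diff(z1, 2) - expfun.diff(z2)
assert sp.simplify(applied - (u1**2 - u2)*expfun) == 0

# A concrete point on the hypersurface {p = 0}: u = (3, 9).
assert sp.simplify(applied.subs({u1: 3, u2: 9})) == 0
```

This is precisely the mechanism behind Lemma 3.6 below: exponentials are eigenfunctions of constant-coefficient operators.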
Another requirement for the space F is that it is closed under differentiation. In other words, if φ = φ(z_1, . . . , z_n) lies in F then so does ∂_i • φ = ∂φ/∂z_i for i = 1, 2, . . . , n. The elements of F^k are vector-valued functions ψ = ψ(z). Their coordinates ψ_i are scalar-valued functions in F. All in all, F should be large, in the sense that it furnishes enough solutions. Formulated algebraically, we want F to be an injective R-module [25]. A more precise desideratum, formulated by Oberst [28,29,30], is that F should be an injective cogenerator.
Examples of injective cogenerators include the ring C[[z 1 , . . . , z n ]] of formal power series, the space C ∞ (R n ) of smooth complex-valued functions over R n , or more generally, the space D ′ (R n ) of complex-valued distributions on R n . If Ω is any open convex domain in R n then we can also take F to be C ∞ (Ω) or D ′ (Ω). In this paper we focus on algebraic methods. Analytic difficulties are mostly swept under the rug.
Our PDE are elements in the free R-module R^k, that is, they are column vectors of length k whose entries are polynomials in ∂ = (∂_1, . . . , ∂_n). Such a vector acts on F^k by coordinate-wise application of the differential operators, followed by adding up the results in F. In this manner, each element in R^k defines an R-linear map F^k → F. For instance, the third vector in (12) is an element in R² that acts on functions ψ : R⁴ → C² in F² as in (14). The right hand side of (14) is a scalar-valued function R⁴ → C, that is, it is an element of F. Our systems of PDE are submodules M of the free module R^k. By Hilbert's Basis Theorem, every module M is finitely generated, so we can write M = image_R(A), where A is a k × l matrix with entries in R. Each column of A is a generator of M and it defines a differential operator that maps F^k to F. The solution space to the PDE given by M equals

Sol(M) = { ψ ∈ F^k : m(∂) • ψ = 0 for all m ∈ M }.   (15)

It suffices to take the operators m from a generating set of M, such as the l columns of A. The case k = 1 is of special interest, since we often consider PDE for scalar-valued functions.
In that case, the submodule is an ideal in the polynomial ring R and we denote this by I. The solution space Sol(I) of the ideal I ⊆ R is the set of functions φ in F such that p(∂) • φ = 0 for all p ∈ I. Thus ideals are instances of modules, with their own notation. The solution spaces Sol(M) and Sol(I) are C-vector spaces and R-modules. Indeed, any C-linear combination of solutions is again a solution. The R-module action means applying the same differential operator p(∂) to each coordinate, which leads to another vector in F k . This action takes solutions to solutions because the ring of differential operators with constant coefficients R = C[∂ 1 , . . . , ∂ n ] is commutative.
The purpose of this paper is to present practical methods for the following task:

Given a k × l matrix A with entries in R = K[∂_1, . . . , ∂_n], compute a good representation for the solution space Sol(M) of the module M = image_R(A).   (16)
If k = 1 then we consider the ideal I generated by the entries of A and we compute Sol(I). This raises the question of what a "good representation" means. The formulas in (2), (5), (9) and (11) are definitely good. They guide us to what is desirable. Our general answer stems from the following important result at the crossroads of analysis and algebra. It involves two sets of unknowns z = (z_1, . . . , z_n) and x = (x_1, . . . , x_n). Here x gives coordinates on certain irreducible varieties V_i in C^n that are parameter spaces for solutions. Our solutions ψ are functions in z. We take F = C^∞(Ω) where Ω ⊂ R^n is open, convex, and bounded.

Theorem 2.2 (Ehrenpreis-Palamodov Fundamental Principle). Consider a module M ⊆ R^k, representing linear PDE for a function ψ : Ω → C^k. There exist irreducible varieties V_1, . . . , V_s in C^n and finitely many vectors B_ij of polynomials in 2n unknowns (x, z), all independent of the set Ω, such that any solution ψ ∈ F^k admits an integral representation

ψ(z) = Σ_{i=1}^{s} Σ_{j=1}^{m_i} ∫_{V_i} B_ij(x, z) exp(x^t z) dµ_ij(x).   (17)

Here m_i is a certain invariant of (M, V_i) and each µ_ij is a bounded measure supported on the variety V_i.

For a proof of Theorem 2.2 we refer to [32]. Other references with different emphases include [5,25,29]. For a perspective from commutative algebra see [11,12].
In the next sections we will study the ingredients in Theorem 2.2. Given the module M, we compute each associated variety V i , the arithmetic length m i of M along V i , and the Noetherian multipliers B i,1 , B i,2 , . . . , B i,m i . We shall see that not all n of the unknowns z 1 , . . . , z n appear in the polynomials B i,j but only a subset of codim(V i ) of them.
The most basic example is the ODE in (1), with l = n = k = 1. Here V_i = {u_i} is the ith root of the polynomial (3), which has multiplicity m_i, and B_{i,j} = z^{j−1}. The measure µ_ij is a scaled Dirac measure on u_i, so the integrals in (17) are multiples of the basis functions (2).
In light of Theorem 2.2, we now refine our computational task in (16) as follows:

Given a k × l matrix A with entries in R = K[∂_1, . . . , ∂_n], compute the varieties V_i and the Noetherian multipliers B_ij(x, z). This encodes Sol(M) for M = image_R(A).   (18)
In our introductory examples we gave formulas for the general solution, namely (5) and (9). We claim that such formulas can be read off from the integrals in (17). For instance, for the wave equation (4), we have s = 2, B 1,1 = B 1,2 = 1, and (5) is obtained by integrating exp(x t z) against measures dµ i1 (x) on two lines V 1 and V 2 in C 2 . For the system (8), we find s = 6, with m 1 = m 2 = m 3 = 1 and m 4 = m 5 = m 6 = 2, and the nine integrals in (17) translate into (9). We shall explain such a translation in full detail for two other examples.
Example 2.3. Consider the PDE (19) for a scalar-valued function φ = φ(z_1, z_2, z_3). This is [10, Example 4.2]. A Macaulay2 computation as in Section 5 shows that s = 1 and m_1 = 4, and it reveals the four Noetherian multipliers B_{1,1}, . . . , B_{1,4}. Arbitrary functions f(z_2) = ∫ exp(tz_2) dµ(t) are obtained by integrating against a suitable measure µ on the line V_1 = {(0, t, 0) : t ∈ C} ⊂ C³. Their derivatives are found by differentiating under the integral sign, namely f′(z_2) = ∫ t · exp(tz_2) dµ(t). Consider four functions a, b, c, d, each specified by a different measure. The sum of the four integrals in (17) then evaluates to the general solution of the PDE (19), according to Ehrenpreis-Palamodov.
Our final example uses concepts from primary decomposition, to be reviewed in Section 3.
Example 2.4 (n = 4, k = 2, l = 3). Let M ⊂ R² be the module generated by the columns of the matrix A in (21). Computing Sol(M) means solving the system of PDE in (21). We apply Theorem 2.2 to derive the general solution. The module M has six associated primes P_1, . . . , P_6. Four of them are minimal and two are embedded. We find that m_1 = m_2 = m_3 = m_4 = m_6 = 1 and m_5 = 4. A minimal primary decomposition is given by primary submodules M_1, . . . , M_6 of R², each of which contains M, as in (22). The number of Noetherian multipliers B_ij is Σ_{i=1}^{6} m_i = 9. These nine vectors describe all solutions to our PDE. For instance, B_{3,1} gives the solutions (α(z_1, z_4), 0)^t, and B_{5,1} gives solutions built from two bivariate functions α, β.
Furthermore B 1,1 and B 6,1 encode the two families of solutions mentioned after (21).
For the latter, we note that V_6 = V(P_6) is the surface in C⁴ with parametric representation (x_1, x_2, x_3, x_4) = (s²t, st², s³, t³) for s, t ∈ C. This surface is the cone over the twisted cubic curve, in the same notation as in [11, Section 1]. The kernel under the integral in (17) is a solution to M_6, and hence to M, for any values of s and t. Integrating the left hand side over x ∈ V_6 amounts to integrating the right hand side over (s, t) ∈ C². Any such integral is also a solution. Ehrenpreis-Palamodov tells us that these are all the solutions.

Modules and Varieties
Our aim is to offer practical tools for solving PDE. The input is a k × l matrix A with entries in R = K[∂_1, . . . , ∂_n], and M = image_R(A) is the corresponding submodule of R^k = ⊕_{j=1}^{k} R e_j. The output is the description of Sol(M) sought in (18). That description is unique up to basis change, in the sense of [12, Remark 3.8], by the discussion in Section 4. Our method is implemented in a Macaulay2 command, called solvePDE, to be described in Section 5.
We now explain the ingredients of Theorem 2.2 coming from commutative algebra (cf. [18]). For a vector m ∈ R^k, the quotient (M : m) is the ideal {f ∈ R : fm ∈ M}. A prime ideal P_i ⊆ R is associated to M if there exists m ∈ R^k such that (M : m) = P_i. Since R is Noetherian, the list of associated primes of M is finite, say P_1, . . . , P_s. If s = 1 then the module M is called primary, or P_1-primary. A primary decomposition of M is a list of primary submodules M_1, . . . , M_s, where M_i is P_i-primary and M = M_1 ∩ · · · ∩ M_s. Primary decomposition is a standard topic in commutative algebra. It is usually presented for ideals (k = 1), as in [27, Chapter 3]. The case of modules is analogous. The latest version of Macaulay2 has an implementation of primary decomposition for modules, as described in [9, Section 2]. Given M, the primary module M_i is not unique if P_i is an embedded prime.
The contribution of the primary module M_i to M is quantified by a positive integer m_i, called the arithmetic length of M along P_i. To define this, we consider the localization (R_{P_i})^k / M_{P_i}. This is a module over the local ring R_{P_i}. The arithmetic length m_i is the length of the largest submodule of finite length in (R_{P_i})^k / M_{P_i}. The sum m_1 + · · · + m_s is an invariant of the module M, denoted amult(M), and known as the arithmetic multiplicity of M. These numbers can be computed in Macaulay2 as in [12, Remark 5.1]. We return to these invariants in Theorem 4.3.
To make the connection to Theorem 2.2, we set V i = V (P i ) for i = 1, 2, . . . , s. Thus, V i is the irreducible variety in C n defined by the prime ideal P i in R = K[∂ 1 , . . . , ∂ n ]. The integer m i is an invariant of the pair (M, V i ): it measures the thickness of the module M along V i .
By taking the union of the irreducible varieties V_i we obtain the variety V(M) = V_1 ∪ V_2 ∪ · · · ∪ V_s. Algebraists refer to V(M) as the support of M, while analysts call it the characteristic variety of M. The support is generally reducible, with at most s irreducible components. For instance, the module M in Example 2.4 has six associated primes, and an explicit primary decomposition was given in (22). However, the support V(M) has only four irreducible components in C⁴, namely one hyperplane, two 2-dimensional planes, and one nonlinear surface (the cone over the twisted cubic). The relationship between modules and ideals mirrors the relationship between PDE for vector-valued functions and related PDE for scalar-valued functions. To pursue this a bit further, we now define two ideals that are naturally associated with a given module M ⊆ R^k.
The first ideal is the annihilator of the quotient module R^k/M = coker_R(A), which is I = Ann(R^k/M) = {f ∈ R : f R^k ⊆ M}. The second is the zeroth Fitting ideal of R^k/M, which is the ideal in R generated by the k × k minors of the presentation matrix A. It is independent of the choice of A, and we write J = Fitt_0(R^k/M). We are interested in the affine varieties in C^n defined by these ideals. They are denoted by V(I) and V(J) respectively. The following is a standard result in commutative algebra.
Proposition 3.2. The three varieties above are equal for every submodule M of R^k, that is,

V(M) = V(I) = V(J).   (23)

Proof. This follows from [18, Proposition 20.7].
Remark 3.3. It can happen that rank(A) < k, for instance when k > l. In that case the k × k minors of A all vanish, so J = ⟨0⟩ and V(J) = C^n. Geometrically, the module M furnishes a coherent sheaf that is supported on the entire space C^n. For instance, let k = n = 2, l = 1 and A = (∂_1, −∂_2)^t. The PDE asks for pairs (ψ_1, ψ_2) such that ∂ψ_1/∂z_1 = ∂ψ_2/∂z_2. We see that Sol(M) consists of all pairs (∂α/∂z_2, ∂α/∂z_1), where α = α(z_1, z_2) runs over functions in two variables. In general, the left kernel of A furnishes differential operators for creating solutions to M.
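The recipe in Remark 3.3 can be checked directly: the following sympy sketch verifies that every pair (∂α/∂z_2, ∂α/∂z_1), for an arbitrary twice differentiable α, solves the PDE, by equality of mixed partial derivatives.

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
alpha = sp.Function('alpha')(z1, z2)

# Candidate solution pair (psi1, psi2) = (d alpha/d z2, d alpha/d z1).
psi1 = alpha.diff(z2)
psi2 = alpha.diff(z1)

# The PDE from the text: d psi1/d z1 = d psi2/d z2.
assert sp.simplify(psi1.diff(z1) - psi2.diff(z2)) == 0
```

Both sides equal the mixed partial ∂²α/∂z_1∂z_2, which is why the left kernel construction always produces solutions.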
The following example shows that (23) is not true at the level of schemes (cf. Section 6).
and M the submodule of R³ given by the columns of A. The sets of associated primes of these three objects differ. The support V(M) is a plane in 3-space, on which I and J define different scheme structures. Our module M defines a coherent sheaf on that plane that lives between these two schemes. We consider the PDE in each of the three cases, we compute the Noetherian multipliers, and from this we derive the general solution. To begin with, the first two terms in the general form of functions in Sol(J) give functions in the subspace Sol(I). The elements of Sol(M) are vectors representing all functions C³ → C³ that satisfy the five PDE given by the matrix A.
Remark 3.5. The quotient R/I embeds naturally into the direct sum of k copies of R^k/M, by sending 1 to e_j in the jth copy. This implies Ass(I) ⊆ Ass(M). It would be worthwhile to understand how the differential primary decompositions of I, J and M are related, and to study the implications for the solution spaces Sol(I), Sol(J) and Sol(M).

Lemma 3.6. Fix u ∈ C^n. There exists c ∈ C^k \ {0} such that the exponential function

ψ(z) = c · exp(u_1 z_1 + · · · + u_n z_n)   (24)

lies in Sol(M) if and only if u lies in the characteristic variety V(M).

Proof. Let a_ij(∂) denote the entries of the matrix A(∂). Then (24) lies in Sol(M) if and only if

Σ_{i=1}^{k} c_i a_ij(u) exp(u_1 z_1 + · · · + u_n z_n) = 0  for all j = 1, . . . , l.

This condition holds if and only if (c_1, . . . , c_k) · A(u) is the zero vector in C^l. We conclude that, for any given u ∈ C^n, the previous condition is satisfied for some c ∈ C^k \ {0} if and only if rank(A(u)) < k, if and only if u ∈ V(M) = V(I). Here we use Proposition 3.2.
Here is an alternative way to interpret the characteristic variety of a system of PDE.

Proposition 3.7. The solution space Sol(M) contains an exponential solution q(z) · exp(u^t z) if and only if u ∈ V(M). Here q is some vector of k polynomials in n unknowns, as in (13).
Proof. One direction is clear from Lemma 3.6. Next, suppose q(z) exp(u^t z) ∈ Sol(M). The partial derivative of this function with respect to any unknown z_i is also in Sol(M). Since Sol(M) is an R-module, applying the operators ∂_i − u_i repeatedly lowers the degree of the polynomial factor until it is a nonzero constant. Hence the exponential function exp(u^t z) itself lies in Sol(M), and Lemma 3.6 yields u ∈ V(M).

The solution space Sol(M) to a submodule M ⊆ R^k is a vector space over C. It is infinite-dimensional whenever V(M) is a variety of positive dimension. This follows from Lemma 3.6 because there are infinitely many points u in V(M). However, if V(M) is a finite subset of C^n, then Sol(M) is finite-dimensional. This is the content of the next theorem.

Theorem 3.8. The solution space Sol(M) is finite-dimensional if and only if the variety V(M) is finite.

Proof. This is the main result in Oberst's article [30], proved in the setting of injective cogenerators F. The same statement for F = C^∞(Ω) appears in [7, Ch. 8, Theorem 7.1]. The scalar case (k = 1) is found in [27, Theorem 3.27]. The proof given there uses solutions in the power series ring, which is an injective cogenerator, and it generalizes to modules.
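For a hands-on instance of the finite-dimensional case, consider the sample ideal I = ⟨∂_1², ∂_2²⟩ (our own choice, not an example from the text): here V(I) = {(0, 0)} is a single point, and Sol(I) is spanned by the four polynomials below.

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')

# For I = <d1^2, d2^2>, the support V(I) = {(0,0)} is finite, and the
# solution space is spanned by these four polynomials (a sketch, assuming
# this standard example; dim_C Sol(I) = 4).
basis = [sp.Integer(1), z1, z2, z1*z2]

for phi in basis:
    assert phi.diff(z1, 2) == 0   # d1^2 annihilates phi
    assert phi.diff(z2, 2) == 0   # d2^2 annihilates phi
```

Any polynomial with a z_1-degree or z_2-degree above 1 survives one of the two operators, so no further solutions exist, matching the finiteness predicted by the theorem.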
By a polynomial solution we mean a vector q(z) whose coordinates are polynomials. It is an interesting problem to identify polynomial solutions when V (M) is no longer finite, and to decide whether these are dense in the infinite-dimensional space of all solutions. Here "dense" refers to the topology on F used by Lomadze in [26]. The following result gives an algebraic characterization of the closure in Sol(M) of the subspace of polynomial solutions. The result gives rise to algebraic algorithms for answering analytic questions about a system of PDE. The property in the first sentence can be decided by running the primary decomposition algorithm in [9]. For the second sentence, we need to compute a double saturation as above. This can be carried out in Macaulay2 as well.

Differential Primary Decomposition
We now shift gears and pass to a setting that is dual to the one we have seen so far. Namely, we discuss differential primary decompositions [9,12]. That duality is subtle and can be confusing at first sight. To mitigate this, we introduce new notation. We set x i = ∂ i = ∂ z i for i = 1, . . . , n. Thus R is now the polynomial ring K[x 1 , . . . , x n ]. This is the notation we are used to from algebra courses (such as [27]). We write ∂ x 1 , . . . , ∂ xn for the differential operators corresponding to x 1 , . . . , x n . Later on, we also identify z i = ∂ x i , and we think of the unknowns x and z in the multipliers B i (x, z) as dual in the sense of the Fourier transform.
The ring of differential operators on the polynomial ring R is the Weyl algebra

D_n = K⟨ x_1, . . . , x_n, ∂_{x_1}, . . . , ∂_{x_n} ⟩.   (25)

The 2n generators commute, except for the n relations ∂_{x_i} x_i − x_i ∂_{x_i} = 1, which express the Product Rule from Calculus. Elements in the Weyl algebra D_n are linear differential operators with polynomial coefficients. We write δ • p for the result of applying δ ∈ D_n to a polynomial p ∈ R. We let D_n^k denote the k-tuples of differential operators in D_n. These operate on the free module R^k coordinate-wise, followed by summing in R. Fix a submodule M of R^k and let P_1, . . . , P_s be its associated primes, as in Section 3. A differential primary decomposition of M is a list A_1, . . . , A_s of finite subsets of D_n^k such that

M = { m ∈ R^k : δ • m ∈ P_i for all δ ∈ A_i and all i = 1, . . . , s }.   (26)

This is a membership test for the module M using differential operators. This test is geometric, since the polynomial δ • m lies in P_i if and only if it vanishes on the variety V_i = V(P_i).
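The Weyl algebra relation ∂_{x_i} x_i − x_i ∂_{x_i} = 1 is just the product rule, as the following one-line sympy check shows by applying both sides to an arbitrary function.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

# (dx * x - x * dx) applied to f(x): differentiate x*f, then subtract
# x times f'.  The product rule leaves exactly f behind.
lhs = (x*f).diff(x) - x*f.diff(x)
assert sp.simplify(lhs - f) == 0
```

So the commutator [∂_x, x] acts as the identity operator, which is the defining relation of D_n.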
Theorem 4.1. Every submodule M of R k has a differential primary decomposition. We can choose the sets A 1 , . . . , A s such that |A i | is the arithmetic length of M along the prime P i .
Proof and discussion. The result is proved in [12] and further refined in [9]. These sources also develop an algorithm. We shall explain this in Section 5, along with a discussion of the Macaulay2 command solvePDE, which computes differential primary decompositions.
The differential operators in A 1 , . . . , A s are known as Noetherian operators in the literature; see [10,11,15,31]. Theorem 4.1 says that we can find a collection of amult(M) = m 1 + · · · + m s Noetherian operators in D k n to characterize membership in the module M.
Remark 4.2. The construction of Noetherian operators is studied in [7,8,10,11,23,31]. Some of these sources offer explicit methods, while others remain at an abstract level. All previous methods share one serious shortcoming, namely they yield operators separately for each primary component M i of M. They do not take into account how one primary component is embedded into another. This leads to a number of operators that can be much larger than amult(M). We refer to [12,Example 5.6] for an instance from algebraic statistics where the previous methods require 1044 Noetherian operators, while amult(M) = 207 suffice.
While Theorem 4.1 makes no claim of minimality, it is known that amult(M) is the minimal number of Noetherian operators required for a differential primary decomposition of a certain desirable form. To make this precise, we begin with a few necessary definitions.
For any given subset S of {x_1, . . . , x_n}, the relative Weyl algebra is defined as the subring of the Weyl algebra D_n using only differential operators corresponding to variables not in S:

D_n(S) = K⟨ x_1, . . . , x_n, ∂_{x_j} : x_j ∉ S ⟩.

For each associated prime P_i we fix a subset S_i ⊆ {x_1, . . . , x_n} such that P_i ∩ K[S_i] = {0} and S_i is maximal with this property. Thus, S_i is a maximal independent set of coordinates on the irreducible variety V(P_i). Equivalently, S_i is a basis of the algebraic matroid defined by the prime P_i; cf. [27, Example 13.2]. The cardinality of S_i equals the dimension of V(P_i).
Theorem 4.3. The differential primary decomposition in Theorem 4.1 can be chosen so that A_i ⊂ D_n(S_i)^k for i = 1, . . . , s. Moreover, the arithmetic length of M along P_i is a lower bound for the cardinality of A_i in any differential primary decomposition of M such that A_i ⊂ D_n(S_i)^k for i = 1, . . . , s.
Proof and discussion. This was shown in [12, Theorem 4.6]. The case of ideals (k = 1) appears in [12, Theorem 3.6]. See also [9]. The theory developed in [12] is more general, in that R can be any Noetherian K-algebra. In this paper we restrict to polynomial rings; that case is treated in detail in [9].
We next argue that Theorems 2.2 and 4.1 are really two sides of the same coin. Every element A in the Weyl algebra D_n acts as a differential operator with polynomial coefficients on functions in the unknowns x = (x_1, . . . , x_n). Such a differential operator has a unique representation where all derivatives are moved to the right of the polynomial coefficients:

A(x, ∂_x) = Σ_{r,s ∈ N^n} c_{r,s} x_1^{r_1} · · · x_n^{r_n} ∂_{x_1}^{s_1} · · · ∂_{x_n}^{s_n}.   (27)

There is a natural K-linear isomorphism between the Weyl algebra D_n and the polynomial ring K[x, z] which takes the operator A in (27) to the following polynomial B in 2n variables:

B(x, z) = Σ_{r,s ∈ N^n} c_{r,s} x_1^{r_1} · · · x_n^{r_n} z_1^{s_1} · · · z_n^{s_n}.   (28)
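The passage from (27) to (28) is a simple rewriting of symbols. Here is a hedged Python sketch of that map; the encoding of operators as coefficient dictionaries is our own, and the sample operator is hypothetical.

```python
import sympy as sp

x1, x2, z1, z2 = sp.symbols('x1 x2 z1 z2')

# A Weyl algebra element in normal form (27), encoded as a dictionary
# {(r, s): coefficient}, where r and s are exponent vectors for the x's
# and the dx's.  This sample stands for A = 3*x1*dx2 + x2^2*dx1*dx2.
A = {((1, 0), (0, 1)): 3, ((0, 2), (1, 1)): 1}

def multiplier(op):
    """Apply the isomorphism (27) -> (28): replace each dx_i by z_i."""
    B = sp.Integer(0)
    for (r, s), coeff in op.items():
        B += coeff * x1**r[0] * x2**r[1] * z1**s[0] * z2**s[1]
    return sp.expand(B)

B = multiplier(A)
assert B == 3*x1*z2 + x2**2*z1*z2
```

Because the normal form (27) has all derivatives on the right, the map is a well-defined K-linear bijection, even though D_n itself is noncommutative.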
In Sections 1, 2 and 3, polynomials in R = K[x 1 , . . . , x n ] = K[∂ 1 , . . . , ∂ n ] act as differential operators on functions in the unknowns z = (z 1 , . . . , z n ). For such operators, polynomials in x are constants. By contrast, in the current section, we introduced the Weyl algebra D n . Its elements act on functions in x = (x 1 , . . . , x n ), with polynomials in z being constants. These two different actions of differential operators, by D n and R on scalar-valued functions, extend to actions by D k n and R k on vector-valued functions. We highlight the following key point: Our distinction between the z-variables and x-variables is absolutely essential.
The derivation of Theorem 4.4 rests on the following lemma on duality between x and z.

Lemma 4.6. Let p and q be polynomials in n unknowns with coefficients in K. We have

q(∂_z) • ( p(z) exp(x^t z) ) = p(∂_x) • ( q(x) exp(x^t z) ).   (30)

Proof. The parenthesized expression on the left equals p(∂_x) • exp(x^t z), while that on the right equals q(∂_z) • exp(x^t z). Therefore the expression in (30) is the result of applying the operator p(∂_x) q(∂_z) = q(∂_z) p(∂_x) to exp(x^t z), when viewed as a function in 2n unknowns.
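The commutation p(∂_x) q(∂_z) = q(∂_z) p(∂_x) used in this proof is easy to confirm symbolically. In the sketch below, p and q are arbitrary sample polynomials of our own choosing, applied to exp(x^t z) for n = 2.

```python
import sympy as sp

x1, x2, z1, z2 = sp.symbols('x1 x2 z1 z2')
E = sp.exp(x1*z1 + x2*z2)

def p_dx(h):
    # Sample operator p(dx) = dx1^2 + dx2, acting in the x-variables.
    return h.diff(x1, 2) + h.diff(x2)

def q_dz(h):
    # Sample operator q(dz) = dz1*dz2, acting in the z-variables.
    return h.diff(z1).diff(z2)

# The two operators commute on exp(x^t z), as in the proof of Lemma 4.6.
lhs_val = q_dz(p_dx(E))
rhs_val = p_dx(q_dz(E))
assert sp.simplify(lhs_val - rhs_val) == 0
```

The check works for any choices of p and q because partial derivatives in disjoint groups of variables always commute.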
We now generalize this lemma to k ≥ 2: we replace p by a polynomial vector that depends on both x and z, and we rename that vector using the identification between (27) and (28).
Proof. If k = 1, we write A(x, ∂_x) = Σ_α c_α(x) ∂_x^α as in (27) and B(x, z) = Σ_α c_α(x) z^α as in (28). Only finitely many of the polynomials c_α(x) are nonzero. Applying Lemma 4.6 to each summand gives the desired identity for k = 1. The extension from k = 1 to k ≥ 2 follows because the differential operation • is K-linear.
We now take a step towards proving Theorem 4.4.

Proposition 4.8. The bijection between D_n^k and K[x, z]^k, given by identifying the operator A in (27) with the polynomial B in (28), restricts to a bijection between the sets A and B.
vanishes for all x ∈ V(P) and all polynomials f_1, . . . , f_l ∈ C[x]. Since the space of complex-valued polynomials is dense in the space of all entire functions on C^n, the preceding implies that the same expression vanishes for all z ∈ C^n, x ∈ V(P) and j = 1, . . . , l.
We conclude that the polynomial vector B(x, z) corresponding to A(x, ∂ x ) lies in B.
To prove the converse, we note that the implications above are reversible.
This uses the fact that linear combinations of the exponential functions x → exp(x t z), for z ∈ C n , are also dense in the space of entire functions.
Proof of Theorem 4.4. Let A be any finite subset of A which gives a differential primary decomposition of the P-primary module M. This exists and can be chosen to have cardinality equal to the length of M along P. Let B be the set of Noetherian multipliers (28) corresponding to the set A of Noetherian operators (27). Proposition 4.8 shows that the exponential function z ↦ B(x, z) exp(x^t z) is in Sol(M) whenever x ∈ V(P) and B ∈ B. Hence all C-linear combinations of such functions are in Sol(M). More generally, by differentiating under the integral sign, we find that all functions of the form (17) are solutions of M. We need to argue that all solutions in F = C^∞(Ω) admit such an integral representation. Suppose first that all associated primes of M are minimal. Then each A_i spans a bimodule in the sense of [9, Theorem 3.2 (d)]. Hence, for each associated prime P_i, the module M_i = {m ∈ R^k : δ • m ∈ P_i for all δ ∈ A_i} is P_i-primary, and M = M_1 ∩ · · · ∩ M_s is a minimal primary decomposition. The operators in A_i are in the relative Weyl algebra D_n(S_i) and fully characterize the P_i-primary component of M. We may thus follow the classical analytical constructions in the books [7,23,32] to patch together the integral representations of Sol(M_i) for i = 1, . . . , s, under the correspondence of Noetherian operators and Noetherian multipliers. Therefore, all solutions have the form (17).
Things are more delicate when M has embedded primes. Namely, if P_i is embedded, then the operators in A_i only characterize the contribution of the P_i-primary component relative to all other components contained in P_i. We see this in Section 5. One argues by enlarging A_i to vector space generators of the relevant bimodule. Then the previous patching argument applies. Afterwards, one shows that the added summands in the integral representation are redundant, because they are covered by the associated varieties V(P_j) containing V(P_i).

Software and Algorithm
In this section we present an algorithm for solving linear PDE with constant coefficients. It is based on the methods for ideals given in [8,10,11]. The case of modules appears in [9]. We note that the computation of Noetherian operators has a long history, going back to work in the 1990s by Ulrich Oberst [28,29,30,31], who developed a construction of Noetherian operators for primary modules. This was further developed by Damiano, Sabadini and Struppa [15], who presented the first Gröbner-based algorithm. It works for primary ideals under the restrictive assumption that the characteristic variety has a rational point after passing to an (algebraically non-closed) field of fractions. Their article also points to an implementation in CoCoA, but we were unable to access that code. Since these early approaches rely on the ideals or modules being primary, using them in practice requires first computing a primary decomposition. If there are embedded primes, the number of Noetherian operators output by these methods will not be minimal either.
We here present a new algorithm that is universally applicable to all ideals and modules over a polynomial ring. There are no restrictions on the input, and the output is minimal. The input is a submodule M of R^k, where R = K[x_1, . . . , x_n]. The output is a differential primary decomposition of size amult(M), as in Theorem 4.3. A first step is to find Ass(M) = {P_1, . . . , P_s}. For each associated prime P_i, the elements A(x, ∂_x) in the finite set A_i ⊂ D_n(S_i) are rewritten as polynomials B(x, z), using the identification of (27) with (28). Only the codim(P_i) many variables z_i with x_i ∈ S_i appear in these Noetherian multipliers B.
We now describe our implementation for (18) in Macaulay2 [21]. The command is called solvePDE, as in [12,Section 5]. It is distributed with Macaulay2 starting from version 1.18 in the package NoetherianOperators [8]. The user begins by fixing a polynomial ring R = K[x 1 , . . . , x n ]. Here K is usually the rational numbers QQ. Fairly arbitrary variable names x i are allowed. The argument of solvePDE is an ideal in R or a submodule of R k . The output is a list of pairs P i , {B i1 , . . . , B i,m i } for i = 1, . . . , s, where P i is a prime ideal given by generators in R, and each B ij is a vector over a newly created polynomial ring K[x 1 , . . . , x n , z 1 , . . . , z n ]. The new variables z i are named internally by Macaulay2. The system writes dx i for z i . To be precise, each new variable is created from an old variable by prepending the character d. This notation can be confusing at first, but one gets used to it. The logic comes from the differential primary decompositions described in [12,Section 5].
Each B_ij in the output of solvePDE encodes an exponential solution B_ij(x, z) exp(x^t z) to M. Here x denotes both the old variables chosen by the user and the points in the irreducible variety V(P_i) ⊆ C^n. The solution is a function in the new unknowns z = (dx_1, . . . , dx_n). For instance, if n = 3 and the input is in the ring QQ[u, v, w], then the output lives in the ring QQ[u, v, w, du, dv, dw]. Each solution to the PDE is a function ψ(du, dv, dw), and these functions are parametrized by a variety V(P_i) in a 3-space whose coordinates are (u, v, w).
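As a quick illustration of this encoding, one can check symbolically that a function of the form B(x, z) exp(x^t z) is a solution whenever x lies on the characteristic variety. The heat-type ideal below is our own toy example, not one computed in the text; the multiplier here is simply B = 1.

```python
import sympy as sp

z1, z2, x1 = sp.symbols('z1 z2 x1')

# The ideal <x1^2 - x2> encodes the PDE  d^2(phi)/dz1^2 = d(phi)/dz2.
# Its characteristic variety is the parabola {x2 = x1^2}, so the
# exponential solutions are  phi = exp(x1*z1 + x1^2*z2)  for x1 in C.
phi = sp.exp(x1*z1 + x1**2*z2)
residual = sp.simplify(sp.diff(phi, z1, 2) - sp.diff(phi, z2))
```

Superpositions of such exponentials over points of the parabola give further solutions, exactly as in the integral representation (17).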
We now demonstrate how this works for two examples featured in the introduction. The reader is encouraged to run this code, and to check that the output is the solution (9).
The method in solvePDE is described in Algorithm 1 below. A key ingredient is a translation map. We now explain this in the simplest case, when the module is supported in one point. Suppose V(M) = {u} for some u ∈ K^n. We set m_u = ⟨x_1 − u_1, . . . , x_n − u_n⟩. The following two results are straightforward. We will later use them when M is any primary module, u is the generic point of V(M), and K = K(u) is the associated field extension of K. We note that all Noetherian operators over a K-rational point can be taken to have constant coefficients. This follows from Theorem 3.8. This observation reduces the computation of solutions for a primary module to finding the polynomial solutions of the translated module. Next, we bound the degrees of these polynomials.
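The translation trick can be seen in a small hypothetical example (our own, for illustration). Take the point-supported ideal ⟨(∂_1 − u_1)^2, ∂_2 − u_2⟩ at the rational point u = (2, 3). Translating to the origin gives the m_0-primary ideal ⟨∂_1^2, ∂_2⟩, whose polynomial solutions are 1 and z_1, and multiplying these by exp(u · z) recovers solutions of the original system.

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
u1, u2 = 2, 3   # a hypothetical K-rational point u = (2, 3)

# Polynomial solutions {1, z1} of the translated ideal <d1^2, d2>,
# lifted to solutions of <(d1-u1)^2, d2-u2> by multiplying with exp(u.z)
residuals = []
for B in (sp.Integer(1), z1):
    phi = B * sp.exp(u1*z1 + u2*z2)
    r1 = sp.simplify(sp.diff(phi, z1, 2) - 2*u1*sp.diff(phi, z1) + u1**2*phi)
    r2 = sp.simplify(sp.diff(phi, z2) - u2*phi)
    residuals += [r1, r2]
```

Both residuals vanish, confirming that translation by u commutes with solving, up to the exponential factor.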
Example 5.5. [n = k = r = 2] The following module is m_0-primary of multiplicity three. Here c_1, c_2, c_3, c_4, c_5 are arbitrary constants in K. The matrix Diff(M) has three rows, one for each generator of M, and it has 12 columns, indexed by e_1, z_1e_1, . . . , z_2^2e_1, e_2, z_1e_2, . . . , z_2^2e_2. The space ker_K(Diff(M)) is 3-dimensional. A basis furnishes three polynomial solutions.

We now turn to Algorithm 1. The input and output are as described in (18). The method was introduced in [9, Algorithm 4.6] for computing differential primary decompositions. We use it for solving PDE. It is implemented in Macaulay2 under the command solvePDE. In our discussion, the line numbers refer to the corresponding lines of pseudocode in Algorithm 1.

Algorithm 1 solvePDE
Input: An arbitrary submodule M of R^k.
Output: A list of associated primes with corresponding Noetherian multipliers.
1: for each associated prime ideal P of M do
2:   U ← (M : f^∞), where f lies in every associated prime not contained in P
3:   V ← (U : P^∞)
4:   r ← the smallest number such that V ∩ P^{r+1}R^k is a subset of U
5:   S ← a maximal set of independent variables modulo P
6:   K ← Frac(R/P)
7:   T ← K[y_i : x_i ∈ S]
8:   γ ← the translation map R^k → T^k in (37)
9:   m ← ⟨y_i : x_i ∈ S⟩, the irrelevant ideal of T
10:  Û ← the m-primary T-module obtained from U via γ
11:  V̂ ← the m-primary T-module obtained from V via γ
12:  N ← {z^α e_j : |α| ≤ r, j = 1, . . . , k}
13:  Diff(Û) ← the matrix with rows indexed by generators of Û and columns by N
14:  Diff(V̂) ← the analogous matrix for V̂
15:  K ← ker_K(Diff(Û)) / ker_K(Diff(V̂))
16:  A ← a K-basis of the quotient space K
17:  B ← the set of Noetherian multipliers obtained by lifting the elements of A
18:  output the pair (P, B)

The steps in lines 2-18 are repeated for each P ∈ Ass(M). For a fixed associated prime P, our goal is to identify the contribution to Sol(M) of the P-primary component of M.
Lines 2-3
To achieve this goal, we study solutions for two different R-submodules of R^k.
The first one, denoted U, is the intersection of all P_i-primary components of M, where P_i runs over the associated primes contained in P. Thus U = M R^k_P ∩ R^k, which is the extension-contraction module of M under localization at P. It is computed as U = (M : f^∞), where f ∈ R \ P is contained in every associated prime P_j that is not contained in P.
The second module, denoted V, is the intersection of all P_i-primary components of M, where P_i is strictly contained in P. Hence V = (U : P^∞) is the saturation of U at P. We have U = V ∩ Q, where Q is a P-primary component of M. Thus the difference between the solution spaces Sol(U) and Sol(V) is caused by the primary module Q.
When P is a minimal prime, U is the unique P -primary component of M, and V = R k .

Line 4
The integer r bounds the degree of the Noetherian multipliers associated to U but not to V.
Namely, if the function φ(z) = B(x, z) exp(x^t z) lies in Sol(U) \ Sol(V) for all x ∈ V(P), then the z-degree of the polynomial B(x, z) is at most r. This leads to an ansatz for the Noetherian multipliers responsible for the difference between Sol(U) and Sol(V). Writing u_i for the image of x_i in K = Frac(R/P), the ring map γ is defined as in (37). By abuse of notation, we denote by γ the extension of (37) to a map R^k → T^k.
Lines 9-11
Let m := ⟨y_i : x_i ∈ S⟩ be the irrelevant ideal of T. We define the T-submodules Û and V̂. These modules are m-primary: their solutions are finite-dimensional K-vector spaces consisting of polynomials of degree ≤ r. The polynomials in Sol(Û) \ Sol(V̂) capture the difference between Û and V̂, and also the difference between U and V after lifting.

Lines 12-14
We construct matrices Diff(Û) and Diff(V̂) with entries in K[z_i : x_i ∈ S]. As in (34), their kernels over K correspond to polynomial solutions of Û and V̂. The set N = {z^α e_j : |α| ≤ r, j = 1, . . . , k} is a K-basis for the elements of degree ≤ r in K[z_i : x_i ∈ S]^k. The y_i-variables act on the z_i-variables as partial derivatives, i.e. y_i = ∂/∂z_i. We define the matrix Diff(Û) as follows. Let Û_1, . . . , Û_ℓ be generators of Û. The rows of Diff(Û) are indexed by these generators, the columns are indexed by N, and the entries are the polynomials Û_i • z^α e_j. In the same way we construct Diff(V̂).
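The kernel computation that follows is plain linear algebra over the ground field. The sympy sketch below carries it out for a hypothetical pair of m-primary ideals (not the modules of the text, and with k = 1), by imposing the operators on a degree-≤ r ansatz; this is equivalent to computing ker_K(Diff).

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')

def solution_dim(ops, r):
    """Dimension of the space of polynomial solutions of degree <= r.

    Each operator is a list of terms (c, (a, b)), which stands for the
    differential operator  sum of c * d^a/dz1^a d^b/dz2^b.  Imposing the
    operators on a general ansatz is equivalent to computing the kernel
    of the matrix Diff over the ground field.
    """
    monos = [z1**i * z2**j for i in range(r + 1) for j in range(r + 1 - i)]
    coeffs = sp.symbols(f'c0:{len(monos)}')
    B = sum(c * m for c, m in zip(coeffs, monos))
    eqs = []
    for op in ops:
        g = sp.expand(sum(c * sp.diff(B, z1, a, z2, b) for c, (a, b) in op))
        if g != 0:
            eqs.extend(sp.Poly(g, z1, z2).coeffs())
    sol = sp.solve(eqs, coeffs, dict=True)
    solved = sol[0] if sol else {}
    return len(coeffs) - len(solved)

# U_hat = <y1^2 - y2, y2^2> is m-primary of multiplicity 4; its solutions
# are spanned by 1, z1, z1^2 + 2*z2, z1^3 + 6*z1*z2 (all of degree <= 3).
dim_U = solution_dim([[(1, (2, 0)), (-1, (0, 1))], [(1, (0, 2))]], r=3)
# V_hat = <y1, y2>: the only solutions are the constants.
dim_V = solution_dim([[(1, (1, 0))], [(1, (0, 1))]], r=3)
```

The quotient of the two kernels, of dimension 4 − 1 = 3 here, is what the algorithm extracts in the next two steps.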
Lines 15-16
Let ker_K(Diff(Û)) be the space of vectors in the kernel of Diff(Û) whose entries are in K. The K-vector space ker_K(Diff(Û)) parametrizes the polynomial solutions Σ_{j=1}^k Σ_{|α|≤r} v_{α,j} z^α e_j ∈ Sol(Û). The same holds for V̂. The quotient space K := ker_K(Diff(Û)) / ker_K(Diff(V̂)) characterizes the excess solutions in Sol(Û) relative to Sol(V̂). Write A for a K-basis of K.

Lines 17-18
We interpret A as a set of Noetherian multipliers for M by performing a series of lifts and transformations. For each element v̂ ∈ A, we choose a representative v ∈ ker_K(Diff(Û)). The entries of v are in K = Frac(R/P), and may contain denominators.
Multiplying v by a common multiple of the denominators yields a vector with entries in R/P, indexed by N. We lift this to a vector u = (u_{α,j}) with entries in R. The Noetherian multiplier corresponding to u is a vector in R[dx_i : x_i ∈ S]^k. Applying the map v̂ ↦ u to each v̂ ∈ A yields the set B of Noetherian multipliers. These multipliers describe the contribution of the P-primary component of M to Sol(M).
The output of Algorithm 1 is a list of pairs (P, B), where P ranges over Ass(M) and B = {B_1, . . . , B_m} is a subset of R[dx_1, . . . , dx_n]^k. The cardinality m is the multiplicity of M along P. The output describes the solutions to the PDE given by M. Consider the corresponding functions φ_P. Then the space of solutions to M consists of all sums Σ_{P ∈ Ass(M)} φ_P(dx_1, . . . , dx_n).
A differential primary decomposition of M is obtained from this by reading dx i as ∂ x i . Indeed, the command differentialPrimaryDecomposition described in [9] is identical to our command solvePDE. All examples in [9, Section 6] can be interpreted as solving PDE.
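For a toy illustration of the sum over associated primes (our own example, not one computed in the text), take M = ⟨∂_1∂_2⟩. Its two associated primes ⟨x_1⟩ and ⟨x_2⟩ contribute the summands f(z_2) and g(z_1), and any such sum is a solution.

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')

# M = <d1*d2> has two associated primes, <x1> and <x2>.  Superpositions
# of exp(x2*z2) over {x1 = 0} give functions of z2 alone, and
# superpositions of exp(x1*z1) over {x2 = 0} give functions of z1 alone.
phi = sp.sin(z2) + sp.exp(3*z1)          # one concrete such sum
residual = sp.diff(phi, z1, 1, z2, 1)    # apply the operator d1*d2
```

The residual vanishes for any choice of the two summands, matching the decomposition of Sol(M) into one piece per associated prime.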

Schemes and Coherent Sheaves
The concepts of schemes and coherent sheaves are central to modern algebraic geometry. They generalize varieties and vector bundles, and they encode geometric structures with multiplicities. The point is that schemes, and likewise the supports of coherent sheaves, are generally nonreduced. We here argue that our linear PDE offer a useful way to think about the geometry of these objects. That perspective motivated the writing of [27, Section 3.3].
The affine schemes we consider are defined by ideals I in a polynomial ring R. Likewise, submodules M of R k represent coherent sheaves on C n . We study the affine scheme Spec(R/I) and the coherent sheaf given by the module R k /M. The underlying geometric objects are the affine varieties V (I) and V (M) in C n . The latter was discussed in Section 3. The solution spaces Sol(I) or Sol(M) furnish nonreduced structures on these varieties, encoded in the integral representations due to Ehrenpreis-Palamodov. According to Section 4, these are dual to differential primary decompositions. Coherent sheaves were a classical tool in the analysis of linear PDE, but in the analytic category, where their role was largely theoretical. The Ehrenpreis-Palamodov Fundamental Principle appears in Hörmander's book under the header Coherent analytic sheaves on Stein manifolds [23, Chapter VII]. Likewise, Treves' exposition, in the title of [35, Section 3.2], calls for Analytic sheaves to the rescue. By contrast, sheaves in this paper are concrete and algebraic: they are modules in Macaulay2.
One purpose of this section is to explore how PDE and their solutions behave under degenerations. We consider ideals and modules whose generators depend on a parameter ǫ. This is modelled algebraically by working over the field K = C(ǫ) of rational functions in the variable ǫ. Algorithm 1 can be applied to the polynomial ring R = K[x 1 , . . . , x n ] over that field. We think of ǫ as a small quantity and we are interested in what happens when ǫ → 0.
Our discussion in this section is very informal. This is by design. We present a sequence of examples that illustrate the geometric ideas. The only formal result is Theorem 6.6, which concerns the role of the Quot scheme in parametrizing systems of linear PDE. Example 6.1 (n = 2). Consider the prime ideal I_ǫ = ⟨∂_1^2 − ǫ^2∂_2⟩. For nonzero parameters ǫ, by Theorem 2.2, the solutions to this PDE are represented as one-dimensional integrals α_ǫ(z_1, z_2) = ∫ exp(ǫ t z_1 + t^2 z_2) dµ(t) ∈ Sol(I_ǫ).
By taking the limit for ǫ → 0, this yields arbitrary functions a(z_2). These are among the solutions to I_0 = ⟨∂_1^2⟩. Other limit solutions are obtained via the reflection t ↦ −t. Set β_ǫ(z_1, z_2) = (1/(2ǫ)) ∫ (exp(ǫ t z_1 + t^2 z_2) − exp(−ǫ t z_1 + t^2 z_2)) dµ(t). Note the similarity to the one-dimensional wave equation (5) with c = ǫ. The solution for ǫ = 0 is given in (6). This is found from the integrals above by taking the following limit: lim_{ǫ→0} β_ǫ(z_1, z_2) = ∫ t z_1 exp(t^2 z_2) dµ(t). (38) We conclude that the general solution to I_0 equals φ(z_1, z_2) = a(z_2) + z_1 b(z_2), where b is any function in one variable. The calculus limit in (38) realizes a scheme-theoretic limit in the sense of algebraic geometry. Namely, two lines in (7) converge to a double line in C^2.
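The limit computation above can be replayed symbolically. The normalization by 2ǫ and the reflection t ↦ −t are as in the text, while the concrete integrand below stands in for the superposition against the measure µ.

```python
import sympy as sp

z1, z2, t, eps = sp.symbols('z1 z2 t epsilon')

# Two exponential solutions of I_eps = <d1^2 - eps^2*d2>, mirrored by t -> -t
plus  = sp.exp( eps*t*z1 + t**2*z2)
minus = sp.exp(-eps*t*z1 + t**2*z2)

# Their normalized difference degenerates to a solution of I_0 = <d1^2>
lim = sp.limit((plus - minus) / (2*eps), eps, 0)
residual = sp.diff(lim, z1, 2)   # the limit is killed by d1^2
```

The limit equals t·z_1·exp(t^2 z_2), which lies in the second summand z_1 b(z_2) of the general solution to I_0.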
Example 6.2 (n = 3). For ǫ ≠ 0 consider the curve t ↦ (ǫt^3, t^4, ǫ^2t^2) in C^3, and let I_ǫ denote its prime ideal. The solution space Sol(I_ǫ) consists of the functions in (39). What happens to these functions when ǫ tends to zero? We address this question algebraically. The scheme-theoretic limit of the given ideal I_ǫ is the ideal in Example 2.3. This is verified by a Gröbner basis computation (cf. [18, Section 15.8]). Passing from ideals to their varieties, we see a toric curve in C^3 that degenerates to a line with multiplicity four. We claim that the formula in (20) arises from (39), just as in Example 6.1. Namely, set i = √−1 and let φ_s ∈ Sol(I_ǫ) be the function that is obtained from φ in (39) by replacing the parameter t with i^s t. Then the following four functions on the left are solutions to I_ǫ, and the functions obtained as limits on the right are precisely the four summands seen in (20). Thus, the solution spaces to this family of PDE reflect the degeneration of the toric curve.
Such limits make sense also for modules. If a module M_ǫ ⊆ R^k depends on a parameter ǫ, then we study its solution space Sol(M_ǫ) as ǫ tends to zero. Geometrically, we examine flat families of coherent sheaves on C^n or on P^{n−1}. A typical scenario comes from the action of the torus (C^*)^n, where Gröbner degenerations arise as limits under one-parameter subgroups. The limit objects are monomial ideals (for k = 1) or torus-fixed submodules (for k ≥ 2). The next example illustrates their rich structure with an explicit family of torus-fixed submodules. Example 6.3 (n = 2, k = 3, l = 6). Given a 3 × 6 matrix A with random real entries, we obtain a module M which is torus-fixed and m-primary, where m = ⟨∂_1, ∂_2⟩, and amult(M) = 10. A basis of Sol(M) is given by ten polynomial solutions, namely the standard basis vectors e_1, e_2, e_3, four vectors that are multiples of z_1, z_1, z_2, z_2, and three vectors that are multiples of z_1^2, z_1z_2, z_2^2. The reader is invited to verify this with Macaulay2. By varying the matrix A, and by extracting the vector multipliers of 1, z_1 and z_1^2, we obtain any complete flag of subspaces in C^3. The vector multipliers of 1, z_2, and z_2^2 give us another complete flag of subspaces in C^3, and the multiplier of z_1z_2 gives us the intersection line of the planes corresponding to the multipliers of z_1 and z_2. This is illustrated in Figure 1. Thus flag varieties, with possible additional structure, appear naturally in such families.

Figure 1: The coefficient vectors of the solutions to the PDE in Example 6.3 correspond to linear spaces with the given inclusions. We obtain two complete flags in C^3, along with one interaction between the two. Experts on quiver representations will take note.
The degenerations of ideals and modules we saw point us to Hilbert schemes and Quot schemes. Let us now also take a fresh look at Example 5.5. The modules M in that example form a flat family over the affine space C 5 with coordinates c = (c 1 , c 2 , c 3 , c 4 , c 5 ). For c = 0 we obtain the PDE whose solution space equals C{e 1 , z 1 e 1 , z 2 1 e 1 }. But, what happens when one of the coordinates of c tends to infinity? That limit exists in the Quot scheme.
In our context, Hilbert schemes and Quot schemes serve as parameter spaces for primary ideals and primary modules. This was shown for ideals in [11] and for modules in [9]. In what follows we shall discuss the latter case. Fix a prime ideal P of codimension c in R = K[x 1 , . . . , x n ]. Write K for the field of fractions of the integral domain R/P , as in Line 6 of Algorithm 1. We write u 1 , . . . , u n for the images in K of the variables x 1 , . . . , x n in R. After possibly permuting these variables, we shall assume that P ∩ K[x c+1 , . . . , x n ] = {0}. The set {u c+1 , . . . , u n } is algebraically independent over K, so it serves as S in Line 5.
Consider the formal power series ring S = K[[y_1, . . . , y_c]], where y_1, . . . , y_c are new variables. This is a local ring with maximal ideal m = ⟨y_1, . . . , y_c⟩. We are interested in m-primary submodules L of S^k. The quotient module S^k/L is finite-dimensional as a K-vector space, and we write ν = dim_K(S^k/L) for its dimension. The punctual Quot scheme is a parameter space whose points are precisely those modules. We denote the Quot scheme by Quot_ν(S^k) = {L ⊂ S^k : L submodule with Ass(L) = {m} and dim_K(S^k/L) = ν}. (40) This is a quasi-projective scheme over K, i.e. it can be defined by a finite system of polynomial equations and inequations in a large but finite set of variables. Each solution to that system is one submodule L. This construction goes back to Grothendieck, and it plays a fundamental role in parametrizing coherent sheaves in algebraic geometry. While a constructive approach to Quot schemes exists, thanks to Skjelnes [34], the problem remains to write defining equations for Quot_ν(S^k) in a computer-readable format, for small values of c, k, ν. A natural place to start would be the case c = 2, given that coherent sheaves supported at a smooth point on a surface are of considerable interest in geometry and physics [1, 3, 20, 22]. The next two examples offer a concrete illustration of the concept of Quot schemes. We exhibit the Quot schemes that parametrize two families of linear PDE we encountered before. Let K be any field. Replacing ∂_1, ∂_2 with y_1, y_2 in Example 6.3, every 3 × 6 matrix A over K defines a submodule L of S^3. The quotient S^3/L is a 10-dimensional K-vector space, so L corresponds to a point in the Quot scheme Quot_10(S^3). By varying A, we obtain a closed subscheme of Quot_10(S^3), which contains the complete flag variety we saw in Example 6.3. For k = 1, the Quot scheme is the punctual Hilbert scheme Hilb_ν(S); see [6].
The points on this Hilbert scheme represent m-primary ideals of length ν in S = K[[y 1 , . . . , y c ]]. It was shown in [11, Theorem 2.1] that Hilb ν (S) parametrizes the set of all P -primary ideals in R of multiplicity ν. This means that we can encode P -primary ideals in R by m-primary ideals in S, thus reducing scheme structures on any higher-dimensional variety to a scheme structure on a single point. This was generalized from ideals to submodules (k ≥ 2) by Chen and Cid-Ruiz [9]. Geometrically, we encode coherent sheaves by those supported at one point, namely the generic point of V (P ), corresponding to the field extension K/K. Here is the main result from [9], stated for the polynomial ring R, analogously to [11, Theorem 2.1].
Theorem 6.6. The following four sets of objects are in a natural bijective correspondence. Moreover, any basis of the K-subspace in item (d) can be lifted to a finite subset A of D_{n,c}^k such that (41) holds. Here D_{n,c} is the subalgebra of the Weyl algebra D_n consisting of all operators (27) with s_{c+1} = · · · = s_n = 0. This is a special case of (26). Elements in D_{n,c} are differential operators in ∂_{x_1}, . . . , ∂_{x_c} whose coefficients are polynomials in x_1, . . . , x_n. Note that D_{n,0} = R and D_{n,n} = D_n. Equation (41) says that A is a differential primary decomposition for the primary module M. The Noetherian operators in A characterize membership in M. In this paper, however, we focus on the K-linear subspaces in item (c). By clearing denominators, we can represent such a subspace by a basis B of elements in K[x_1, . . . , x_n][z_1, . . . , z_c]. These are precisely the Noetherian multipliers needed for the integral representation of Sol(M). In summary, Theorem 6.6 may be understood as a theoretical counterpart to Algorithm 1. The following example elucidates the important role played by the Quot scheme in our algorithm. This step is the analogue for modules of the elimination that creates a large P-primary ideal Q from [11, equation (5)]. Geometrically speaking, the 10-dimensional space of polynomial vectors that are solutions to the PDE in Example 6.3 encodes a coherent sheaf of rank 3 on the singular surface V(P).
The ground field K in Section 5 need not be algebraically closed. In particular, we usually take K = Q when computing in Macaulay2. But this requires some adjustments in our results. For instance, Theorem 3.8 does not apply when the coordinates of u ∈ C^n are not in K. In such situations, we may take K to be an algebraic extension of K. We close with an example that shows the effect of the choice of ground field in a concrete computation. The output shows that amult(M) = 5 when K = Q, but ν = amult(M) = 6 when K = C. Applying now the command solvePDE(M), we find the differential primary decomposition. The module M has three associated primes over K = Q. The first gives three polynomial solutions, including −z_1z_2. The second prime contributes the solution (−1, 1)^t exp(z_1 + z_2), and the third gives solutions over the field extension of K = Q defined by the third associated prime.

What Next?
The results presented in this article suggest many directions for future study and research.

Special Ideals and Modules
One immediate next step is to explore the PDE corresponding to various specific ideals and modules that have appeared in the literature in commutative algebra and algebraic geometry. One interesting example is the class of ideals studied recently by Conca and Tsakiris in [14], namely products of linear ideals. A minimal primary decomposition for such an ideal I is given in [14,Theorem 3.2]. It would be gratifying to find the arithmetic multiplicity amult(I) and the solution spaces Sol(I) in terms of matroidal data for the subspaces in V (I).
A more challenging problem is to compute the solution space Sol(J) when J is an ideal generated by n power sums in R = Q[∂ 1 , . . . , ∂ n ]. This problem is nontrivial even for n = 3.
To be more precise, we fix relatively prime integers 0 < a < b < c, and we consider the ideal J_{a,b,c} = ⟨∂_1^a + ∂_2^a + ∂_3^a, ∂_1^b + ∂_2^b + ∂_3^b, ∂_1^c + ∂_2^c + ∂_3^c⟩. If (a, b, c) = (1, 2, 3) then V(J_{1,2,3}) = {0} and Sol(J_{1,2,3}) is a six-dimensional space of polynomials, spanned by the discriminant (z_1 − z_2)(z_1 − z_3)(z_2 − z_3) and its successive derivatives. In general, it is conjectured in [13] that V(J_{a,b,c}) = {0} if and only if abc is a multiple of 6. If this holds then Sol(J_{a,b,c}) consists of polynomials. If this does not hold then we must compute V(J_{a,b,c}) and extract the Noetherian multipliers from all associated primes of J_{a,b,c}. For instance, for (a, b, c) = (2, 5, 8) with K = Q, the arithmetic multiplicity is 38, one associated prime is ⟨∂_1 + ∂_2 + ∂_3, ∂_2^2 + ∂_2∂_3 + ∂_3^2⟩, and the largest degree of a polynomial solution is 10. It will be worthwhile to explore the solution spaces Sol(M) for modules M with special combinatorial structure. One natural place to start is with syzygy modules. For instance, take the vectors Σ_j (∂α/∂z_j) e_j, where α = α(z_1, z_2, z_3, z_4) ranges over all functions in F. Toric geometry [27, Chapter 8] furnishes modules whose PDE should be interesting. The initial ideals of a toric ideal with respect to weight vectors are binomial ideals, so the theory of binomial primary decomposition applies, and it gives regular polyhedral subdivisions as in [27, Theorem 13.28]. Non-monomial initial ideals should be studied from the differential point of view. Passing to coherent sheaves, we may examine the modules representing toric vector bundles and their Gröbner degenerations. In particular, the cotangent bundle of an embedded toric variety, in affine or projective space, is likely to encompass intriguing PDE.
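The claim about (a, b, c) = (1, 2, 3) is easy to check symbolically: the discriminant is annihilated by the three power-sum operators. A sympy verification:

```python
import sympy as sp

z1, z2, z3 = sp.symbols('z1 z2 z3')

# The discriminant spanning the socle of Sol(J_{1,2,3})
Delta = (z1 - z2)*(z1 - z3)*(z2 - z3)

# Apply the power-sum operators p_s = d1^s + d2^s + d3^s for s = 1, 2, 3
residuals = [sp.expand(sum(sp.diff(Delta, z, s) for z in (z1, z2, z3)))
             for s in (1, 2, 3)]
```

All three residuals vanish: each p_s • Delta is antisymmetric of degree less than 3, hence divisible by Delta, hence zero.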

Linear PDE with Polynomial Coefficients
We discuss an application to PDE with non-constant coefficients, here taken to be polynomials. Our setting is the Weyl algebra D = C⟨z_1, . . . , z_n, ∂_1, . . . , ∂_n⟩. A linear system of PDE with polynomial coefficients is a D-module. For instance, consider a D-ideal I, that is, a left ideal in the Weyl algebra D. The solution space of I is typically infinite-dimensional.
We construct solutions to I with the method of Gröbner deformations [33, Chapter 2]. Let w ∈ R n be a general weight vector, and consider the initial D-ideal in (−w,w) (I). This is also a D-ideal, and it plays the role of Gröbner bases in solving polynomial equations. We know from [33, Theorem 2.3.3] that in (−w,w) (I) is fixed under the natural action of the n-dimensional algebraic torus (C * ) n on the Weyl algebra D. This action is given in [33, equation (2.14)]. It gives rise to a Lie algebra action generated by the n Euler operators θ i = z i ∂ i for i = 1, 2, . . . , n.
These Euler operators commute pairwise, and they generate a (commutative) polynomial subring C[θ] = C[θ_1, . . . , θ_n] of the Weyl algebra D. If J is any torus-fixed D-ideal then it is generated by operators of the form x^a p(θ) ∂^b, where a, b ∈ N^n. We define the falling factorial [θ]_b = Π_{i=1}^n θ_i(θ_i − 1) · · · (θ_i − b_i + 1). The distraction J̃ is the ideal in C[θ] generated by all polynomials p(θ − b) · [θ]_b, where x^a p(θ) ∂^b runs over a generating set of J. The space of classical solutions to J̃ is equal to that of J. This was exploited in [33, Theorem 2.3.11] under the assumption that J is holonomic, which means that J̃ is zero-dimensional in C[θ]. We here drop that assumption.
Given any D-ideal I, we compute its initial D-ideal J = in_{(−w,w)}(I) for generic w ∈ R^n. Solutions to I degenerate to solutions of J under the Gröbner degeneration given by w. We can often reverse that construction: given solutions to J, we lift them to solutions of I. Now, to construct all solutions of J, we study the Frobenius ideal F = J̃. This is an ideal in C[θ].
We now describe all solutions to a given ideal F in C[θ]. This was done in [33, Theorem 2.3.11] for zero-dimensional F. Ehrenpreis-Palamodov allows us to solve the general case. Here is our algorithm. We replace each operator θ_i = z_i∂_i by the corresponding ∂_i. We then apply solvePDE to get the general solution to the linear PDE with constant coefficients. In that general solution, we now replace each coordinate z_i by its logarithm log(z_i). In particular, each occurrence of exp(u_1z_1 + · · · + u_nz_n) is replaced by a formal monomial z_1^{u_1} · · · z_n^{u_n}. The resulting expression represents the general solution to the Frobenius ideal F. Example 7.1. As a warm-up, we note that a function in one variable z_2 is annihilated by the squared Euler operator θ_2^2 = z_2∂_2z_2∂_2 if and only if it is a C-linear combination of 1 and log(z_2). Consider the Frobenius ideal given by Palamodov's system [11, Example 11]: F = ⟨θ_2^2, θ_3^2, θ_2 − θ_1θ_3⟩. To find all solutions to F, we consider the corresponding ideal ⟨∂_2^2, ∂_3^2, ∂_2 − ∂_1∂_3⟩ in C[∂_1, ∂_2, ∂_3]. By solvePDE, the general solution to that constant coefficient system equals α(z_1) + z_2 · β′(z_1) + z_3 · β(z_1), where α and β are functions in one variable. We now replace z_i by log(z_i) and we abbreviate A(z_1) = α(log(z_1)) and B(z_1) = β(log(z_1)). Thus A and B are again arbitrary functions in one variable. We conclude that the general solution to the given Frobenius ideal F equals φ(z_1, z_2, z_3) = A(z_1) + z_1 · log(z_2) · B′(z_1) + log(z_3) · B(z_1).
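The conclusion of Example 7.1 can be verified symbolically for concrete choices of A and B (the choices below are ours, for testing only):

```python
import sympy as sp

z1, z2, z3 = sp.symbols('z1 z2 z3')

def theta(f, z):                  # Euler operator  theta = z * d/dz
    return z * sp.diff(f, z)

# The general solution with concrete choices A(z1) = z1**2, B(z1) = sin(z1)
A, B = z1**2, sp.sin(z1)
phi = A + z1*sp.log(z2)*sp.diff(B, z1) + sp.log(z3)*B

r1 = sp.simplify(theta(theta(phi, z2), z2))                     # theta2^2
r2 = sp.simplify(theta(theta(phi, z3), z3))                     # theta3^2
r3 = sp.simplify(theta(phi, z2) - theta(theta(phi, z3), z1))    # theta2 - theta1*theta3
```

All three residuals vanish, so phi is annihilated by every generator of the Frobenius ideal F.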
This method can also be applied for k ≥ 2, enabling us to study solutions for any D-module.

Socle Solutions
The solution space Sol(M) to a system M of linear PDE is a complex vector space, typically infinite-dimensional. Our algorithm in Section 5 decomposes that space into finitely many natural pieces, one for each of the integrals in (17). The number amult(M) of pieces is a meaningful invariant from commutative algebra. Each piece is labeled by a polynomial B_ij(x, z) in 2n variables, and it is parametrized by measures µ_ij on the irreducible variety V_i. This approach does not take full advantage of the fact that Sol(M) is an R-module, where R = C[∂_1, . . . , ∂_n]. Indeed, if ψ(z) is any solution to M then so is (∂_i • ψ)(z). So, if we list all solutions, then ∂_i • ψ is redundant provided ψ is already listed. More precisely, we consider the quotient Sol(M) / ⟨∂_1, . . . , ∂_n⟩ Sol(M). (43)
This quotient space is still infinite-dimensional over C, but it often has a much smaller description than Sol(M). A solution to M is called a socle solution if it is nonzero in (43). We pose the problem of modifying solvePDE so that the output is a minimal subset of Noetherian multipliers which represent all the socle solutions. The solution will require the prior development of additional theory in commutative algebra, along the lines of [9, 11, 12]. The situation is straightforward in the case of Theorem 3.8, when the support V(M) is finite. Here the space Sol(M) is finite-dimensional, and it is canonically isomorphic to the vector space dual of R^k/M, as shown in [30]. Finding the socle solutions is a computation using linear algebra over K = C, similar to the three steps after Proposition 5.4. For instance, let k = 1 and suppose that I is a homogeneous ideal in R. The socle solutions are sometimes called volume polynomials [33, Lemma 3.6.20]. The most desirable case arises when I is Gorenstein. Here the socle solution is unique up to scaling, and it fully characterizes I. For instance, consider the power sum ideal ⟨Σ_{i=1}^n ∂_i^s : s = 1, . . . , n⟩. This is Gorenstein with volume polynomial ∆ = Π_{1≤i<j≤n}(z_i − z_j). For n = 3, the ideal I is J_{1,2,3} in Subsection 7.1. Here Sol(I) is a C-vector space of dimension n!. However, as an R-module, it is generated by the single polynomial ∆. A future version of solvePDE should simply output Sol(I) = R∆.
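The statement that Sol(J_{1,2,3}) = R∆ has dimension n! = 6 can be confirmed by linear algebra: the span of ∆ and all of its partial derivatives is 6-dimensional. A sympy sketch:

```python
import sympy as sp
from itertools import product

z = sp.symbols('z1:4')
Delta = (z[0] - z[1])*(z[0] - z[2])*(z[1] - z[2])

# Collect Delta together with all of its partial derivatives.
# Delta has degree 3, so orders (a1, a2, a3) with a1+a2+a3 <= 3 suffice.
polys = []
for a in product(range(4), repeat=3):
    if sum(a) <= 3:
        d = Delta
        for zi, k in zip(z, a):
            d = sp.diff(d, zi, k)
        if sp.expand(d) != 0:
            polys.append(sp.Poly(d, *z))

# Rank of the matrix of coefficient vectors = dim_C of the module R*Delta
monos = sorted({m for p in polys for m in p.monoms()})
M = sp.Matrix([[p.coeff_monomial(m) for m in monos] for p in polys])
dim = M.rank()
```

The rank is 6, matching the dimension count n! for the coinvariant space, with the single socle generator ∆ in top degree.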
It is instructive to revisit the general solutions to PDE we presented in this paper, and to highlight the socle solutions for each of them. For instance, in Example 2.3 we have amult(I) = 4 but only one of the four Noetherian multipliers B i gives a socle solution. The last summand in (20) gives the socle solutions. The first three summands can be obtained from the last summand by taking derivatives. What are the socle solutions in Example 2.4?

From Calculus To Analysis
The storyline of this paper is meant to be accessible for students of multivariable calculus. These students know how to check that (9) is a solution to (8). The derivations in Examples 2.3, 2.4, 5.1, 5.2, 6.1, 6.3 and 6.8 are understandable as well. No prior exposure to abstract algebra is needed to follow these examples, or to download Macaulay2 and run solvePDE.
The objective of this subsection is to move beyond calculus, and to build a bridge to advanced themes and current research in analysis. First of all, we ought to consider inhomogeneous systems of linear PDE with constant coefficients. Such a system has the form