LEFT AND RIGHT EIGENVECTORS OF A VARIANT OF THE SYLVESTER–KAC MATRIX

Abstract. As an extension of Sylvester's matrix, a tridiagonal matrix is investigated by determining both its left and right eigenvectors. Orthogonality relations between the left and right eigenvectors are derived. Two determinants of the matrices constructed from the left and right eigenvectors are evaluated in closed form.


1. Introduction and motivation
Tridiagonal matrices are an important class of matrices in mathematics and physics (see [1, 8, 13, 17, 18, 21]). One particular case is the matrix [τ_{i,j}], whose determinant evaluation was conjectured (without proof) by Sylvester [19, page 305]: for the tridiagonal matrix of order n + 1 with constant main diagonal x, superdiagonal entries 1, 2, . . . , n and subdiagonal entries n, n − 1, . . . , 1, the determinant factorises into the product of the linear factors x + n − 2k for k = 0, 1, . . . , n. For this elegant result, there exist a number of generalisations and applications (see, for example, [2, 4, 9–11, 14–16, 20]). However, eigenvectors have only been found for a few related tridiagonal matrices (see [3, 6, 7, 12]). The first proof of Sylvester's determinant formula is attributed by Muir [17, page 442] to Francesco Mazza in 1866. However, Kac [14] was perhaps the first to give a complete proof of the formula claimed by Sylvester and provided a polynomial characterisation of the eigenvectors through the generating function approach. Therefore, the matrix [τ_{i,j}] is also called the Sylvester–Kac matrix.
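Since the displayed entries of [τ_{i,j}] are not reproduced above, the following Python sketch adopts the standard convention just described (constant diagonal x, superdiagonal 1, 2, . . . , n, subdiagonal n, n − 1, . . . , 1); the helper name sylvester_matrix and the chosen order n = 6 are ours. It checks Sylvester's determinant evaluation together with the integer spectrum established by Kac.

```python
import numpy as np

def sylvester_matrix(n, x=0.0):
    """Sylvester-Kac matrix of order n + 1 in the standard convention:
    constant diagonal x, superdiagonal 1, 2, ..., n, subdiagonal n, n-1, ..., 1."""
    A = np.diag(np.full(n + 1, float(x)))
    for i in range(n):
        A[i, i + 1] = i + 1        # superdiagonal: 1, 2, ..., n
        A[i + 1, i] = n - i        # subdiagonal:   n, n-1, ..., 1
    return A

n = 6
K = sylvester_matrix(n)                               # zero-diagonal (Kac) case
print(np.sort(np.linalg.eigvals(K).real))             # -6, -4, -2, 0, 2, 4, 6

x = 1.5                                               # Sylvester's determinant evaluation
lhs = np.linalg.det(sylvester_matrix(n, x))
rhs = np.prod([x + n - 2 * k for k in range(n + 1)])  # prod_{k=0}^{n} (x + n - 2k)
print(np.isclose(lhs, rhs))                           # True
```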
The first author [5] examined the following extended matrix (here, y has been replaced by 2y to avoid rational expressions for the eigenvalues). For two free variables x and y, we define the tridiagonal matrix Ω_m(x, y) of order m + 1; for instance, Ω_5(x, y) illustrates the general pattern (with zeros omitted).
In mathematics, physics and applied sciences, it is important to determine not only determinants and eigenvalues, but also eigenvectors for certain classes of matrices. The aim of the present paper is to determine the left and right eigenvectors of Ω_m(x, y). Our findings may potentially serve as testing samples (in the sense of [8]) to assess the numerical accuracy of algorithms for computations on similar matrices.
The paper is organised as follows. In the next section, the eigenvectors of Ω_m(x, y) are determined explicitly by following the same approach as in [6]. In Section 3, we prove orthogonality relations between the left and right eigenvectors. Finally, in Section 4, we evaluate, in closed form, the two determinants constructed, respectively, from the left eigenvectors and from the right ones.

2. Left and right eigenvectors
For the sake of brevity, an auxiliary algebraic function ρ of x and y is introduced. The eigenvectors of Ω_m(x, y) are then determined by the following theorem.

THEOREM 2.1. Write m = δ + 2n and let λ_k be an eigenvalue of Ω_m(x, y) with −n ≤ k ≤ n + δ. Then the following two statements hold.
(a) The vector u_k = (u_k(0), u_k(1), . . . , u_k(m)) is a left eigenvector corresponding to the eigenvalue λ_k, where u_k(j) is defined by a binomial sum.

(b) The vector v_k = (v_k(0), v_k(1), . . . , v_k(m)) is a right eigenvector corresponding to the eigenvalue λ_k, where v_k(i) is defined by an analogous binomial sum.

PROOF. We begin by showing that u_k is the left eigenvector corresponding to the eigenvalue λ_k. It suffices to prove that, for each pair (k, j), the left eigenvector relation holds, which can equivalently be restated as (2.1). Observing the functional relations satisfied by ρ, we can manipulate the relevant expression, which leads to the useful relation (2.2). According to the definition, the left-hand side of (2.1) can be written out explicitly, so that the equality (2.1) can be restated as (2.3). The two sums on the left of (2.3) can be combined, and similarly for the two sums on the right. Shifting the summation index of the last sum forward by one, we see that the resulting expressions for the two sides of equation (2.3) coincide. This confirms item (a).

Likewise, (b) will be confirmed if we can show that v_k is the right eigenvector corresponding to λ_k. This can be done by showing, for each pair (k, i), the right eigenvector relation, which can equivalently be restated as (2.4). Keeping (2.2) in mind, we can express the left-hand side of (2.4) in a convenient form, so that (2.4) can be reformulated accordingly. According to the definitions, we first simplify the left-hand side of this equation and then the right-hand side of the same equation. The last two expressions become identical when the summation index of the latter is shifted forward by one. This shows that v_k is indeed the right eigenvector of the matrix Ω_m(x, y) corresponding to λ_k.
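As the entries of Ω_m(x, y) are not reproduced above, the following sketch only illustrates the left/right eigenvector relations asserted in Theorem 2.1 on the classical Sylvester–Kac matrix, used as a stand-in; all variable names are ours.

```python
import numpy as np

n = 5
A = np.zeros((n + 1, n + 1))           # classical Sylvester-Kac matrix as a stand-in
for i in range(n):
    A[i, i + 1] = i + 1
    A[i + 1, i] = n - i

lam, R = np.linalg.eig(A)              # columns of R: right eigenvectors, A v = lam v
mu, L = np.linalg.eig(A.T)             # columns of L: eigenvectors of A^T, i.e. left
                                       # eigenvectors of A written as rows, u A = lam u
R, lam = R[:, np.argsort(lam.real)], np.sort(lam.real)
L = L[:, np.argsort(mu.real)]          # align both spectra: -n, -n+2, ..., n

for k in range(n + 1):
    assert np.allclose(A @ R[:, k], lam[k] * R[:, k])   # right eigenvector relation
    assert np.allclose(L[:, k] @ A, lam[k] * L[:, k])   # left eigenvector relation
print("left/right eigenvector relations verified")
```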

3. Orthogonality relations
For two vectors u and v of the same dimension, denote their usual scalar product by ⟨u, v⟩. The next theorem highlights orthogonality relations between the left and right eigenvectors of Ω_m(x, y).

THEOREM 3.1. Assume u_i and v_j as in Theorem 2.1. Then the following orthogonality relations hold for all i and j subject to −n ≤ i, j ≤ n + δ.
PROOF. To prove these orthogonality relations, write the scalar product ⟨u_i, v_j⟩ as the triple sum (3.1). From this we immediately have ⟨u_i, v_j⟩ = 0 when i ≠ j, since then λ_i ≠ λ_j.
When i = j, the scalar product (3.1) can be reformulated, by first making the replacement j → k − j and then interchanging the order of summation in the triple sum. The inner sum with respect to k can be rewritten under k → κ + j as a sum over 0 ≤ κ ≤ δ + 2n − j and then evaluated in closed form, since this last sum results substantially, apart from an alternating sign, in a finite difference whose order matches the degree of the polynomial involved. Therefore, the triple sum is reduced to a double one and can be evaluated further by the binomial theorem; this is equivalent to the stated expression.
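Again using the classical Sylvester–Kac matrix as a stand-in (the entries of Ω_m(x, y) not being coded here), the biorthogonality asserted by Theorem 3.1 can be checked numerically: the Gram matrix of scalar products ⟨u_i, v_j⟩ should be diagonal. Variable names are ours.

```python
import numpy as np

n = 5
A = np.zeros((n + 1, n + 1))                 # classical Sylvester-Kac stand-in
for i in range(n):
    A[i, i + 1] = i + 1
    A[i + 1, i] = n - i

lam, R = np.linalg.eig(A)
mu, L = np.linalg.eig(A.T)
R = R[:, np.argsort(lam.real)]               # right eigenvectors as columns
L = L[:, np.argsort(mu.real)]                # left eigenvectors as columns of L

G = L.T @ R                                  # G[i, j] = <u_i, v_j>
print(np.allclose(G - np.diag(np.diag(G)), 0))   # True: off-diagonal entries vanish
```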

4. Two determinantal evaluations
Finally, we prove the two determinantal identities stated in the following theorem.
THEOREM 4.1. Let u_i and v_j be as in Theorem 2.1 and define the matrices U_m and V_m built, respectively, from the left eigenvectors u_i and the right eigenvectors v_j. Their determinants can be evaluated in closed form.

PROOF. We first evaluate the determinant det U_m. By making use of (2.2), we rewrite the three-term relation (2.1) as (4.1). For the determinant of the matrix, perform the following three operations:
• add j/(m − j) times the (j − 1)th row to the (j + 1)th row;
• add (2j − m)(ρ − ρ^{−1})/(2m − 2j) times the jth row to the (j + 1)th row; and
• after the above two operations, the entry at position (j + 1, k) in the (j + 1)th row becomes ((2k − m)(ρ + ρ^{−1})/(2m − 2j)) u_{k−n}(j), in view of (4.1).
Repeating this operation upwards for all the rows except the first one and then pulling out the common factors from the rows, we get an expression for det U_m (with k indicating the kth column). By carrying out the same operations in the resulting matrix (except for the first two rows), we can further reduce the determinant. Iterating the same procedure m times and extracting the common factors u_{k−n}(0) from the columns of the resulting matrix, we derive a simplified expression for det U_m. Keeping in mind that u_{k−n}(0) is the binomial coefficient \binom{m}{k} and that the determinant in the middle is of Vandermonde type, we can evaluate the two factors separately. Multiplying these together, we find the closed formula for det U_m.

Finally, to evaluate the other determinant det V_m, rewrite, analogously, the three-term relation (2.4). Similarly, for the determinant, make the following three operations:
• add (m − i + 1)/(i + 1) times the (i − 1)th row to the (i + 1)th row;
• add (2i − m)(ρ − ρ^{−1})/(2i + 2) times the ith row to the (i + 1)th row; and
• after the above two operations, the entry at position (i + 1, k) in the (i + 1)th row becomes a multiple of the corresponding component of the right eigenvector, in analogy with (4.1).

Repeating this operation upwards for all the rows except the first one and then pulling out the common factors from the rows, we get an expression for det V_m (with k indicating the kth column). By carrying out the same operations in the resulting matrix (except for the first two rows), we can reduce this further. Iterating the same procedure m times and extracting the common factors v_k(0) from the columns of the resulting matrix, we find the corresponding simplified expression and, in turn, the closed formula for det V_m.
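A numerical cross-check ties Theorems 3.1 and 4.1 together: with U and V denoting the matrices whose rows are the left and right eigenvectors, U V^T is diagonal by biorthogonality, so det U · det V equals the product of the scalar products ⟨u_k, v_k⟩. The sketch below verifies this identity on the classical Sylvester–Kac matrix, used once more as a stand-in for Ω_m(x, y); all names are ours.

```python
import numpy as np

n = 5
A = np.zeros((n + 1, n + 1))                 # classical Sylvester-Kac stand-in
for i in range(n):
    A[i, i + 1] = i + 1
    A[i + 1, i] = n - i

lam, R = np.linalg.eig(A)
mu, L = np.linalg.eig(A.T)
V = R[:, np.argsort(lam.real)].T             # rows: right eigenvectors v_k
U = L[:, np.argsort(mu.real)].T              # rows: left eigenvectors u_k

lhs = np.linalg.det(U) * np.linalg.det(V)
rhs = np.prod(np.diag(U @ V.T))              # product of <u_k, v_k>
print(np.isclose(lhs, rhs))                  # True, since U V^T is diagonal
```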