
Regularizing algorithm for mixed matrix pencils



Introduction

Van Dooren [7] gave an algorithm that, for each pair (A, B) of complex matrices of the same size, constructs its regularizing decomposition; that is, it constructs a matrix pair that is simultaneously equivalent to (A, B) and has the form

$$(A_1,B_1)\oplus\dots\oplus(A_t,B_t)\oplus(\underline A,\underline B)$$

in which $(\underline A,\underline B)$ is a pair of nonsingular matrices and each other summand has one of the forms:

$$(F_n,G_n),\qquad (F_n^T,G_n^T),\qquad (I_n,J_n(0)),\qquad (J_n(0),I_n),$$

where Jn(0) is the singular Jordan block and

$$F_n:=\begin{bmatrix}0&&0\\1&\ddots&\\&\ddots&0\\0&&1\end{bmatrix},\qquad G_n:=\begin{bmatrix}1&&0\\0&\ddots&\\&\ddots&1\\0&&0\end{bmatrix}$$

are n × (n − 1) matrices; n ≥ 1. Note that $(F_1,G_1)=(0_{10},0_{10})$; we denote by $0_{mn}$ the zero matrix of size m × n, where m, n ∈ {0, 1, 2, ...}. The algorithm uses only unitary transformations, which improves its computational stability.
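For experiments it is convenient to generate the blocks F_n and G_n directly; a small numpy sketch (the helper names `F` and `G` are ours, not from the paper):

```python
import numpy as np

def F(n):
    """The n x (n-1) matrix F_n: zero first row, identity below."""
    return np.eye(n, n - 1, k=-1)

def G(n):
    """The n x (n-1) matrix G_n: identity on top, zero last row."""
    return np.eye(n, n - 1)
```

For n = 1 both helpers return 1 × 0 matrices, matching the convention $(F_1,G_1)=(0_{10},0_{10})$.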

We extend Van Dooren’s algorithm to square complex matrices up to consimilarity transformations $A\mapsto SA\bar S^{-1}$ and to pairs of m × n matrices up to transformations $(A,B)\mapsto(SAR,SB\bar R)$, in which S and R are nonsingular matrices.

A regularizing algorithm for matrices of undirected cycles of linear mappings was constructed by Sergeichuk [6] and, independently, by Varga [8]. A regularizing algorithm for matrices under congruence was constructed by Horn and Sergeichuk [5].

All matrices that we consider are complex matrices.

Regularizing unitary algorithm for matrices under consimilarity

Two matrices A and B are consimilar if there exists a nonsingular matrix S such that $SA\bar S^{-1}=B$. Two matrices are consimilar if and only if they represent the same semilinear operator in different bases. Recall that a mapping 𝒜 : U → V between complex vector spaces is semilinear if

$$\mathcal A(au_1+bu_2)=\bar a\,\mathcal A u_1+\bar b\,\mathcal A u_2$$

for all a, b ∈ ℂ and u1, u2 ∈ U.
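A semilinear operator on ℂⁿ can be modeled numerically as v ↦ A v̄; the following sketch (with a random matrix, purely for illustration) checks the defining identity:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

def op(v):
    """Semilinear operator represented by A: v -> A @ conj(v)."""
    return A @ np.conj(v)

a, b = 2 + 1j, -3j
u1 = rng.standard_normal(3) + 1j * rng.standard_normal(3)
u2 = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# op(a*u1 + b*u2) equals conj(a)*op(u1) + conj(b)*op(u2)
assert np.allclose(op(a * u1 + b * u2),
                   np.conj(a) * op(u1) + np.conj(b) * op(u2))
```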

The canonical form of a matrix under consimilarity is the following (see [3] or [4]):

Each square complex matrix is consimilar to a direct sum, uniquely determined up to permutation of direct summands, of matrices of the following types:

a Jordan block Jk(λ) with λ ≥ 0, and

$\begin{bmatrix}0&1\\\mu&0\end{bmatrix}$ with μ ∉ ℝ or μ < 0.

Thus, each square matrix A is consimilar to a direct sum

$$J_{n_1}(0)\oplus\dots\oplus J_{n_k}(0)\oplus\underline A,$$

in which $\underline A$ is nonsingular and is determined up to consimilarity; the other summands are uniquely determined up to permutation. This sum is called a regularizing decomposition of A. The following algorithm constructs a regularizing decomposition using only unitary transformations.

Algorithm 1

Let A be a singular n × n matrix. By unitary transformations of rows, we reduce it to the form

$$S_1A=\begin{bmatrix}0_{r_1n}\\A'\end{bmatrix},\qquad S_1\ \text{is unitary},$$

in which the rows of A′ are linearly independent. Then we make the coninverse transformation of columns (multiplication by $\bar S_1^{-1}$ on the right) and obtain

$$S_1A\bar S_1^{-1}=\begin{bmatrix}0_{r_1}&0\\\star&A_1\end{bmatrix}.$$

We apply the same procedure to A1 and obtain

$$S_2A_1\bar S_2^{-1}=\begin{bmatrix}0_{r_2}&0\\\star&A_2\end{bmatrix},\qquad S_2\ \text{is unitary},$$

in which the rows of $[\star\ A_2]$ are linearly independent.

We repeat this procedure until we obtain

$$S_tA_{t-1}\bar S_t^{-1}=\begin{bmatrix}0_{r_t}&0\\\star&A_t\end{bmatrix},\qquad S_t\ \text{is unitary},$$

in which At is nonsingular. The result of the algorithm is the sequence r1,r2,...,rt,At.
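The steps above can be sketched numerically, with the SVD supplying the unitary row compression. This is our illustration, not the authors' code; the function name and the rank tolerance are assumptions:

```python
import numpy as np

def regularize_consimilarity(A, tol=1e-10):
    """Sketch of Algorithm 1: split off the singular summands of A under
    consimilarity A -> S A conj(S)^{-1} = S A S^T (S unitary).
    Returns the list [r1, ..., rt] and the nonsingular core At."""
    A = np.asarray(A, dtype=complex)
    rs = []
    while A.shape[0] > 0 and np.linalg.matrix_rank(A, tol=tol) < A.shape[0]:
        U, s, Vh = np.linalg.svd(A)
        rank = int(np.sum(s > tol))
        r = A.shape[0] - rank                      # r_i: row-rank deficiency
        # Rows of U* beyond `rank` span the left null space; move them on top,
        # so S @ A = [[0], [A']] with the rows of A' linearly independent.
        perm = np.concatenate([np.arange(rank, A.shape[0]), np.arange(rank)])
        S = U.conj().T[perm]
        B = S @ A @ S.T                            # for unitary S, conj(S)^{-1} = S^T
        A = B[r:, r:]                              # the trailing block A_i
        rs.append(r)
    return rs, A
```

For example, the singular Jordan block J3(0) yields the sequence r1 = r2 = r3 = 1 with an empty (0 × 0) nonsingular core.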

For a matrix A and a nonnegative integer n, we write

$$A^{(n)}:=\begin{cases}0_{00}&\text{if }n=0;\\A\oplus\dots\oplus A\ (n\text{ summands})&\text{if }n\ge 1.\end{cases}$$

Theorem 1

Let r1,r2,...,rt,At be the sequence obtained by applying Algorithm 1 to a square complex matrix A. Then

r1r2 ≥ ... ≥ rt

and A is consimilar to

$$J_1^{(r_1-r_2)}\oplus J_2^{(r_2-r_3)}\oplus\dots\oplus J_{t-1}^{(r_{t-1}-r_t)}\oplus J_t^{(r_t)}\oplus A_t\tag{1}$$

in which Jk := Jk(0). The matrix At is determined by A up to consimilarity, and the other summands are uniquely determined.

Proof. Let 𝒜 : V → V be a semilinear operator whose matrix in some basis is A. Let W := 𝒜V be the image of 𝒜. Then the matrix of the restriction 𝒜1 : W → W of 𝒜 to W is A1. Applying Algorithm 1 to A1, we get the sequence r2,...,rt,At. Reasoning by induction on the length t of the algorithm, we suppose that r2 ≥ r3 ≥ ··· ≥ rt and that A1 is consimilar to

$$J_1^{(r_2-r_3)}\oplus\dots\oplus J_{t-2}^{(r_{t-1}-r_t)}\oplus J_{t-1}^{(r_t)}\oplus A_t.\tag{2}$$

Thus, 𝒜1 : WW is given by the matrix (2) in some basis of W.

The direct sum (2) defines the decomposition of W into the direct sum of invariant subspaces

$$W=(W_{21}\oplus\dots\oplus W_{2,r_2-r_3})\oplus\dots\oplus(W_{t1}\oplus\dots\oplus W_{tr_t})\oplus W'.$$

Each Wpq is generated by some basis vectors epq2, epq3,...,epqp such that

$$\mathcal A:\ e_{pq2}\mapsto e_{pq3}\mapsto\dots\mapsto e_{pqp}\mapsto 0.$$

For each Wpq, we choose epq1V such that 𝒜 epq1 = epq2. The set

$$\{e_{pqp}\mid 2\le p\le t,\ 1\le q\le r_p-r_{p+1}\}\qquad(r_{t+1}:=0)$$

consists of r2 basis vectors belonging to the kernel of 𝒜; we supplement this set to a basis of the kernel of 𝒜 by some vectors e111,...,e1,r1−r2,1.

The set of vectors epqs supplemented by the vectors of some basis of W′ is a basis of V. The matrix of 𝒜 in this basis has the form (1) because

$$\mathcal A:\ e_{pq1}\mapsto e_{pq2}\mapsto e_{pq3}\mapsto\dots\mapsto e_{pqp}\mapsto 0$$

for all p = 1,...,t and q = 1,...,rprp+1. This completes the proof of Theorem 1.

Example 1

Let a square matrix A define a semilinear operator 𝒜 : V → V and let the singular part of its regularizing decomposition be J2 ⊕ J3 ⊕ J4. This means that V possesses a set of linearly independent vectors forming the Jordan chains

$$\begin{matrix}\mathcal A: & e_1\mapsto e_2\mapsto e_3\mapsto e_4\mapsto 0\\ & f_1\mapsto f_2\mapsto f_3\mapsto 0\\ & g_1\mapsto g_2\mapsto 0\end{matrix}\tag{3}$$

Applying the first step of Algorithm 1, we get A1, whose singular part corresponds to the chains

$$\begin{matrix}\mathcal A: & e_2\mapsto e_3\mapsto e_4\mapsto 0\\ & f_2\mapsto f_3\mapsto 0\\ & g_2\mapsto 0\end{matrix}$$

On the second step, we delete e2, f2, g2, and so on. Thus, ri is the number of vectors in the ith column of (3): r1 = 3, r2 = 3, r3 = 2, r4 = 1. We get the singular part of the regularizing decomposition of A:

$$J_1^{(r_1-r_2)}\oplus\dots\oplus J_{t-1}^{(r_{t-1}-r_t)}\oplus J_t^{(r_t)}=J_1^{(3-3)}\oplus J_2^{(3-2)}\oplus J_3^{(2-1)}\oplus J_4^{(1)}=J_2\oplus J_3\oplus J_4.$$
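The counting in this example is plain arithmetic; a short sketch (variable names are ours):

```python
# r_i = number of vectors in the i-th column of the chains in Example 1
rs = [3, 3, 2, 1]
rs_ext = rs + [0]                      # convention r_{t+1} := 0
# multiplicity of J_p(0) in the regularizing decomposition is r_p - r_{p+1}
mult = {p + 1: rs_ext[p] - rs_ext[p + 1] for p in range(len(rs))}
assert mult == {1: 0, 2: 1, 3: 1, 4: 1}   # i.e. J2 + J3 + J4, each once
```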

In particular, if

(4)

then we can apply Algorithm 1 using only transformations of permutational similarity and obtain

(all unspecified blocks are zero), which is the Weyr canonical form of (4), see [4].

Regularizing unitary algorithm for matrix pairs under mixed equivalence

We say that pairs of m × n matrices (A,B) and (A′,B′) are mixed equivalent if there exist nonsingular S and R such that

$$(SAR,SB\bar R)=(A',B').$$

The direct sum of matrix pairs (A,B) and (C,D) is defined as follows:

$$(A,B)\oplus(C,D)=\left(\begin{bmatrix}A&0\\0&C\end{bmatrix},\begin{bmatrix}B&0\\0&D\end{bmatrix}\right).$$
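In code, this componentwise block-diagonal construction looks as follows (a minimal numpy sketch; the helper names `dsum` and `pair_dsum` are ours):

```python
import numpy as np

def dsum(M, N):
    """Direct sum of two matrices as a block-diagonal matrix."""
    Z1 = np.zeros((M.shape[0], N.shape[1]), dtype=M.dtype)
    Z2 = np.zeros((N.shape[0], M.shape[1]), dtype=M.dtype)
    return np.block([[M, Z1], [Z2, N]])

def pair_dsum(AB, CD):
    """Direct sum of matrix pairs, taken componentwise."""
    (A, B), (C, D) = AB, CD
    return dsum(A, C), dsum(B, D)
```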

The canonical form of a matrix pair under mixed equivalence was obtained by Djoković [2] (his result was extended to undirected cycles of linear and semilinear mappings in [1]):

Each pair (A,B) of matrices of the same size is mixed equivalent to a direct sum, determined uniquely up to permutation of summands, of pairs of the following types:

$$(I_n,J_n(\lambda)),\quad \left(I_n,\begin{bmatrix}0&1\\\mu&0\end{bmatrix}\right),\quad (J_n(0),I_n),\quad (F_n,G_n),\quad (F_n^T,G_n^T),$$

in which λ ≥ 0 and μ ∉ ℝ or μ < 0.

Thus, (A,B) is mixed equivalent to a direct sum of a pair $(\underline A,\underline B)$ of nonsingular matrices and summands of the types:

$$(I_n,J_n(0)),\quad (J_n(0),I_n),\quad (F_n,G_n),\quad (F_n^T,G_n^T),$$

in which $(\underline A,\underline B)$ is determined up to mixed equivalence and the other summands are uniquely determined up to permutation. This sum is called a regularizing decomposition of (A,B). The following algorithm constructs a regularizing decomposition using only unitary transformations.

Algorithm 2

Let (A,B) be a pair of matrices of the same size in which the rows of A are linearly dependent. By unitary transformations of rows, we reduce A to the form

$$S_1A=\begin{bmatrix}0\\A'\end{bmatrix},\qquad S_1\ \text{is unitary},$$

in which the rows of A′ are linearly independent. These transformations change B:

$$S_1B=\begin{bmatrix}B'\\B''\end{bmatrix}.$$

By unitary transformations of columns, we reduce B′ to the form $[B_1'\ 0]$, in which the columns of $B_1'$ are linearly independent, and obtain

$$S_1BR_1=\begin{bmatrix}B_1'&0\\\star&B_1\end{bmatrix},\qquad R_1\ \text{is unitary}.$$

These transformations change A:

$$S_1A\bar R_1=\begin{bmatrix}0_{k_1l_1}&0\\\star&A_1\end{bmatrix}.$$

We apply the same procedure to (A1,B1) and obtain

$$(S_2A_1\bar R_2,S_2B_1R_2)=\left(\begin{bmatrix}0_{k_2l_2}&0\\\star&A_2\end{bmatrix},\begin{bmatrix}B_2'&0\\\star&B_2\end{bmatrix}\right),$$

in which the rows of $[\star\ A_2]$ are linearly independent, S2 and R2 are unitary, and the columns of $B_2'$ are linearly independent.

We repeat this procedure until we obtain

$$(S_tA_{t-1}\bar R_t,S_tB_{t-1}R_t)=\left(\begin{bmatrix}0_{k_tl_t}&0\\\star&A_t\end{bmatrix},\begin{bmatrix}B_t'&0\\\star&B_t\end{bmatrix}\right),$$

in which the rows of At are linearly independent. The result of the algorithm is the sequence

(k1,l1), (k2,l2), ..., (kt,lt), (At,Bt).
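As with Algorithm 1, the unitary reductions can be realized with SVDs. The following is our numerical sketch of Algorithm 2 under the transformation (A, B) ↦ (SAR, SB R̄); the function names and the rank tolerance are our choices:

```python
import numpy as np

def _rank(M, tol=1e-10):
    return 0 if min(M.shape) == 0 else int(np.linalg.matrix_rank(M, tol=tol))

def _row_nullifier(A, tol=1e-10):
    """Unitary S with S @ A = [[0], [A']], rows of A' linearly independent.
    Returns S and the number of zero rows produced."""
    m, n = A.shape
    if n == 0:
        return np.eye(m, dtype=complex), m        # every row of A is zero
    U, s, Vh = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    perm = np.concatenate([np.arange(rank, m), np.arange(rank)])
    return U.conj().T[perm], m - rank

def regularize_mixed(A, B, tol=1e-10):
    """Sketch of Algorithm 2. Returns [(k1,l1), ..., (kt,lt)] and (At, Bt)."""
    A = np.asarray(A, dtype=complex)
    B = np.asarray(B, dtype=complex)
    kl = []
    while _rank(A, tol) < A.shape[0]:             # rows of A still dependent
        S, k = _row_nullifier(A, tol)
        A, B = S @ A, S @ B
        Bp = B[:k, :]                             # the block B'
        if Bp.shape[1] == 0:
            R, l = np.zeros((0, 0), dtype=complex), 0
        else:
            U, s, Vh = np.linalg.svd(Bp)
            l = int(np.sum(s > tol))              # l_i = rank of B'
            R = Vh.conj().T                       # so that B' @ R = [B'_1, 0]
        A = (A @ R.conj())[k:, l:]                # trailing block of S A R-bar
        B = (B @ R)[k:, l:]                       # trailing block of S B R
        kl.append((k, l))
    return kl, (A, B)
```

For instance, the pair (J2(0), I2) produces the sequence (1,1), (1,1), and (F2, G2) produces (1,1), (1,0), in agreement with the reductions used in the proof of Lemma 1 below.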

For a matrix pair (A,B) and a nonnegative integer n, we write

$$(A,B)^{(n)}:=\begin{cases}(0_{00},0_{00})&\text{if }n=0;\\(A,B)\oplus\dots\oplus(A,B)\ (n\text{ summands})&\text{if }n\ge 1.\end{cases}$$

Theorem 2

Let (A,B) be a pair of complex matrices of the same size. Let us apply Algorithm 2 to (A,B) and obtain

(k1,l1), (k2,l2), ..., (kt,lt), (At,Bt).

Let us apply Algorithm 2 to $(\bar A,\bar B):=(B_t^T,A_t^T)$ and obtain

$(\bar k_1,\bar l_1),\ (\bar k_2,\bar l_2),\ \dots,\ (\bar k_{\bar t},\bar l_{\bar t}),\ (\bar A_{\bar t},\bar B_{\bar t})$.

Then (A,B) is mixed equivalent to

$$\begin{aligned}
&(F_1,G_1)^{(k_1-l_1)}\oplus\dots\oplus(F_{t-1},G_{t-1})^{(k_{t-1}-l_{t-1})}\oplus(F_t,G_t)^{(k_t-l_t)}\\
&\oplus(J_1,I_1)^{(l_1-k_2)}\oplus\dots\oplus(J_{t-1},I_{t-1})^{(l_{t-1}-k_t)}\oplus(J_t,I_t)^{(l_t)}\\
&\oplus(F_1^T,G_1^T)^{(\bar k_1-\bar l_1)}\oplus\dots\oplus(F_{\bar t-1}^T,G_{\bar t-1}^T)^{(\bar k_{\bar t-1}-\bar l_{\bar t-1})}\oplus(F_{\bar t}^T,G_{\bar t}^T)^{(\bar k_{\bar t}-\bar l_{\bar t})}\\
&\oplus(I_1,J_1)^{(\bar l_1-\bar k_2)}\oplus\dots\oplus(I_{\bar t-1},J_{\bar t-1})^{(\bar l_{\bar t-1}-\bar k_{\bar t})}\oplus(I_{\bar t},J_{\bar t})^{(\bar l_{\bar t})}\\
&\oplus(\bar B_{\bar t}^T,\bar A_{\bar t}^T)
\end{aligned}$$

(all exponents in parentheses are nonnegative). The pair $(\bar B_{\bar t}^T,\bar A_{\bar t}^T)$ consists of nonsingular matrices; it is determined up to mixed equivalence. The other summands are uniquely determined by (A,B).

The rows of At in Theorem 2 are linearly independent, and so the columns of $\bar B:=A_t^T$ are linearly independent. As follows from Algorithm 2, the columns of $\bar B_{\bar t}$ are linearly independent too. Since the rows of $\bar A_{\bar t}$ are linearly independent and the columns of $\bar B_{\bar t}$ are linearly independent, the matrices in $(\bar A_{\bar t},\bar B_{\bar t})$, which have the same size, are square, and so they are nonsingular. The pairs $(I_n,J_n^T)$ and $(G_n^T,F_n^T)$ are permutationally equivalent to $(I_n,J_n)$ and $(F_n^T,G_n^T)$. Therefore, the following lemma implies Theorem 2.
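Both permutational equivalences used in this reduction are realized by reversal permutations; a small numerical check (our sketch, with n = 4 as an arbitrary example):

```python
import numpy as np

n = 4
J = np.diag(np.ones(n - 1), 1)        # J_n(0): ones on the superdiagonal
P = np.eye(n)[::-1]                   # reversal permutation of order n
Pm = np.eye(n - 1)[::-1]              # reversal permutation of order n-1

# (I_n, J_n^T) -> (P I_n P, P J^T P) = (I_n, J_n): reversal fixes I, flips J^T to J
assert np.allclose(P @ J.T @ P, J)

Fn_T = np.eye(n - 1, n, k=1)          # F_n^T: ones just above the diagonal
Gn_T = np.eye(n - 1, n)               # G_n^T: identity followed by a zero column
# (G_n^T, F_n^T) -> (Pm G_n^T P, Pm F_n^T P) = (F_n^T, G_n^T)
assert np.allclose(Pm @ Gn_T @ P, Fn_T)
assert np.allclose(Pm @ Fn_T @ P, Gn_T)
```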

Lemma 1

Let (A,B) be a pair of complex matrices of the same size. Let us apply Algorithm 2 to (A,B) and obtain

(k1,l1), (k2,l2), ..., (kt,lt), (At,Bt).

Then (A,B) is mixed equivalent to

$$\begin{aligned}
&(F_1,G_1)^{(k_1-l_1)}\oplus\dots\oplus(F_{t-1},G_{t-1})^{(k_{t-1}-l_{t-1})}\oplus(F_t,G_t)^{(k_t-l_t)}\\
&\oplus(J_1,I_1)^{(l_1-k_2)}\oplus\dots\oplus(J_{t-1},I_{t-1})^{(l_{t-1}-k_t)}\\
&\oplus(J_t,I_t)^{(l_t)}\oplus(A_t,B_t)
\end{aligned}\tag{5}$$

(all exponents in parentheses are nonnegative). The rows of At are linearly independent. The pair (At,Bt) is determined up to mixed equivalence. The other summands are uniquely determined by (A,B).

Proof. We write

(A,B) ⇒ (k1,l1,(A1,B1))

if k1,l1, (A1,B1) are obtained from (A,B) in the first step of Algorithm 2.

First we prove two statements.

Statement 1: If

$$(A,B)\Rightarrow(k_1,l_1,(A_1,B_1)),\qquad (\tilde A,\tilde B)\Rightarrow(\tilde k_1,\tilde l_1,(\tilde A_1,\tilde B_1)),\tag{6}$$

and (A,B) is mixed equivalent to $(\tilde A,\tilde B)$, then $k_1=\tilde k_1$, $l_1=\tilde l_1$, and (A1,B1) is mixed equivalent to $(\tilde A_1,\tilde B_1)$.

Let m be the number of rows in A. Then

$$k_1=m-\operatorname{rank}A=m-\operatorname{rank}\tilde A=\tilde k_1.$$

Since (A,B) and $(\tilde A,\tilde B)$ are mixed equivalent and they are reduced by mixed equivalence transformations to

$$\left(\begin{bmatrix}0_{k_1l_1}&0\\X&A_1\end{bmatrix},\begin{bmatrix}B_1'&0\\Y&B_1\end{bmatrix}\right),\qquad\left(\begin{bmatrix}0_{\tilde k_1\tilde l_1}&0\\\tilde X&\tilde A_1\end{bmatrix},\begin{bmatrix}\tilde B_1'&0\\\tilde Y&\tilde B_1\end{bmatrix}\right),\tag{7}$$

there exist nonsingular S and R such that

$$\left(S\begin{bmatrix}0_{k_1l_1}&0\\X&A_1\end{bmatrix},\ S\begin{bmatrix}B_1'&0\\Y&B_1\end{bmatrix}\right)=\left(\begin{bmatrix}0_{\tilde k_1\tilde l_1}&0\\\tilde X&\tilde A_1\end{bmatrix}R,\ \begin{bmatrix}\tilde B_1'&0\\\tilde Y&\tilde B_1\end{bmatrix}\bar R\right).\tag{8}$$

Equating the first matrices of these pairs, we find that S has the form

$$S=\begin{bmatrix}S_{11}&0\\S_{21}&S_{22}\end{bmatrix},\qquad S_{11}\ \text{is}\ k_1\times k_1.$$

Equating the second matrices of the pairs (8), we find that

$$S_{11}\begin{bmatrix}B_1'&0\end{bmatrix}=\begin{bmatrix}\tilde B_1'&0\end{bmatrix}\bar R,\tag{9}$$

and so

$$l_1=\operatorname{rank}\begin{bmatrix}B_1'&0\end{bmatrix}=\operatorname{rank}\begin{bmatrix}\tilde B_1'&0\end{bmatrix}=\tilde l_1.$$

Since $B_1'$ and $\tilde B_1'$ are k1 × l1 and have linearly independent columns, (9) implies that R is of the form

$$R=\begin{bmatrix}R_{11}&0\\R_{21}&R_{22}\end{bmatrix},\qquad R_{11}\ \text{is}\ l_1\times l_1.$$

Equating the (2,2) blocks of the matrices in (8), we get

$$S_{22}A_1=\tilde A_1R_{22},\qquad S_{22}B_1=\tilde B_1\bar R_{22},$$

hence (A1,B1) and $(\tilde A_1,\tilde B_1)$ are mixed equivalent, which completes the proof of Statement 1.

Statement 2: If (6), then

$$(A,B)\oplus(\tilde A,\tilde B)\Rightarrow(k_1+\tilde k_1,\ l_1+\tilde l_1,\ (A_1\oplus\tilde A_1,\ B_1\oplus\tilde B_1)).$$

Indeed, if (A,B) and $(\tilde A,\tilde B)$ are reduced to (7), then $(A,B)\oplus(\tilde A,\tilde B)$ is reduced to

$$\left(\begin{bmatrix}0_{k_1l_1}\oplus 0_{\tilde k_1\tilde l_1}&0\oplus 0\\X\oplus\tilde X&A_1\oplus\tilde A_1\end{bmatrix},\begin{bmatrix}B_1'\oplus\tilde B_1'&0\oplus 0\\Y\oplus\tilde Y&B_1\oplus\tilde B_1\end{bmatrix}\right),$$

which is permutationally equivalent to

$$\left(\begin{bmatrix}0_{k_1l_1}&0\\X&A_1\end{bmatrix},\begin{bmatrix}B_1'&0\\Y&B_1\end{bmatrix}\right)\oplus\left(\begin{bmatrix}0_{\tilde k_1\tilde l_1}&0\\\tilde X&\tilde A_1\end{bmatrix},\begin{bmatrix}\tilde B_1'&0\\\tilde Y&\tilde B_1\end{bmatrix}\right).$$

We are ready to prove Lemma 1 for any pair (A,B). Due to Statement 1, we can replace (A,B) by any mixed equivalent pair. In particular, we can take

$$(A,B)=(F_1,G_1)^{(r_1)}\oplus\dots\oplus(F_t,G_t)^{(r_t)}\oplus(J_1,I_1)^{(s_1)}\oplus\dots\oplus(J_t,I_t)^{(s_t)}\oplus(C,D)\tag{10}$$

for some nonnegative t, r1,...,rt, s1,...,st and some pair (C,D) in which C has linearly independent rows.

Clearly,

$$(J_i,I_i)\Rightarrow\begin{cases}(1,1,(J_{i-1},I_{i-1}))&\text{if }i\ne 1;\\(1,1,(0_{00},0_{00}))&\text{if }i=1,\end{cases}$$

and

$$(F_i,G_i)\Rightarrow\begin{cases}(1,1,(F_{i-1},G_{i-1}))&\text{if }i\ne 1;\\(1,0,(0_{00},0_{00}))&\text{if }i=1.\end{cases}$$

Due to Statement 2,

k1 = m – rank A is the number of all summands of the types (Ji,Ii) and (Fi,Gi),

l1 is the number of all summands of the types (Ji,Ii) and (Fi,Gi), except for (F1,G1),

and

$$(A_1,B_1)=(F_1,G_1)^{(r_2)}\oplus\dots\oplus(F_{t-1},G_{t-1})^{(r_t)}\oplus(J_1,I_1)^{(s_2)}\oplus\dots\oplus(J_{t-1},I_{t-1})^{(s_t)}\oplus(C,D).\tag{11}$$

We find that k1l1 is the number of summands of the type (F1,G1).

Applying the same reasoning to (11) instead of (10), we get that

k2 is the number of all summands of the types (Ji,Ii) and (Fi,Gi) with i ≥ 2,

l2 is the number of all summands of the types (Ji,Ii) with i ≥ 2 and (Fi,Gi) with i ≥ 3,

(A2,B2) = (F1,G1)(r3) ⊕ ··· ⊕ (Ft−2,Gt−2)(rt) ⊕ (J1,I1)(s3) ⊕ ··· ⊕ (C,D).

We find that k2l2 is the number of summands of the type (F2,G2), and that l1k2 is the number of summands of the type (J1,I1), and so on, until we obtain (5).

The fact that the pair (At,Bt) in (5) is determined up to mixed equivalence and the other summands are uniquely determined by (A,B) follows from Statement 1 (or from the canonical form of a matrix pair up to mixed equivalence). This concludes the proofs of Lemma 1 and Theorem 2.

eISSN: 2444-8656
Language: English