This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.
Introduction
Van Dooren [7] gave an algorithm that, for each pair (A, B) of complex matrices of the same size, constructs its regularizing decomposition; that is, it constructs a matrix pair that is simultaneously equivalent to (A, B) and has the form
$$(A_1,B_1) \oplus \dots \oplus (A_t,B_t) \oplus (\underline A,\underline B)$$
in which $(\underline A,\underline B)$ is a pair of nonsingular matrices and each other summand has one of the forms:
$$F_n := \begin{bmatrix}
0 &        & 0 \\
1 & \ddots &   \\
  & \ddots & 0 \\
0 &        & 1
\end{bmatrix},
\qquad
G_n := \begin{bmatrix}
1 &        & 0 \\
0 & \ddots &   \\
  & \ddots & 1 \\
0 &        & 0
\end{bmatrix}$$
are $n \times (n-1)$ matrices; $n \ge 1$. Note that $(F_1,G_1) = (0_{10},0_{10})$; we denote by $0_{mn}$ the zero matrix of size $m \times n$, where $m,n \in \{0,1,2,\dots\}$. The algorithm uses only unitary transformations, which improves its computational stability.
We extend Van Dooren’s algorithm to square complex matrices up to consimilarity transformations $A \mapsto SA\bar S^{-1}$ and to pairs of $m \times n$ matrices up to mixed equivalence transformations $(A,B) \mapsto (SAR, SB\bar R)$, in which $S$ and $R$ are nonsingular matrices.
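A quick numerical sanity check of the consimilarity transformation (a numpy sketch of ours, not from the text): if $B = SA\bar S^{-1}$, then $B\bar B = S(A\bar A)S^{-1}$, so the product $A\bar A$ transforms by ordinary similarity and its eigenvalues, for example, are consimilarity invariants.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # generically nonsingular

B = S @ A @ np.linalg.inv(np.conj(S))   # consimilarity: A -> S A conj(S)^{-1}

# B conj(B) = S (A conj(A)) S^{-1}: A conj(A) transforms by ordinary similarity
lhs = B @ np.conj(B)
rhs = S @ (A @ np.conj(A)) @ np.linalg.inv(S)
assert np.allclose(lhs, rhs)
```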
A regularizing algorithm for matrices of undirected cycles of linear mappings was constructed by Sergeichuk [6] and, independently, by Varga [8]. A regularizing algorithm for matrices under congruence was constructed by Horn and Sergeichuk [5].
All matrices that we consider are complex matrices.
Regularizing unitary algorithm for matrices under consimilarity
Two matrices A and B are consimilar if there exists a nonsingular matrix S such that $SA\bar S^{-1} = B$. Two matrices are consimilar if and only if they represent the same semilinear operator in different bases. Recall that a mapping 𝒜 : U → V between complex vector spaces is semilinear if 𝒜(au + bv) = ā𝒜u + b̄𝒜v for all a, b ∈ ℂ and u, v ∈ U.
The canonical form of a matrix under consimilarity is the following (see [3] or [4]):
Each square complex matrix is consimilar to a direct sum, uniquely determined up to permutation of direct summands, of matrices of the following types:
Thus, each square matrix A is consimilar to a direct sum
$$J_{n_1}(0)\oplus\dots\oplus J_{n_k}(0)\oplus \underline A,$$
in which $\underline A$ is nonsingular and is determined up to consimilarity; the other summands are uniquely determined up to permutation. This sum is called a regularizing decomposition of A. The following algorithm makes it possible to construct a regularizing decomposition using only unitary transformations.
Algorithm 1
Let A be a singular n × n matrix. By unitary transformations of rows, we reduce it to the form
in which $J_k := J_k(0)$, the summand $A_t$ is determined by A up to consimilarity, and the other summands are uniquely determined.
Proof. Let 𝒜 : V → V be a semilinear operator whose matrix in some basis is A. Let W := 𝒜V be the image of 𝒜. Then the matrix of the restriction $\mathcal A_1 : W \to W$ of 𝒜 to W is $A_1$. Applying Algorithm 1 to $A_1$, we get the sequence $r_2,\dots,r_t, A_t$. Reasoning by induction on the length t of the algorithm, we suppose that $r_2 \ge r_3 \ge \dots \ge r_t$ and that $A_1$ is consimilar to
for all $p = 1,\dots,t$ and $q = 1,\dots,r_p - r_{p+1}$. This completes the proof of Theorem 1.
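The unitary transformations used by the algorithm stay within consimilarity: for a unitary matrix $S$ we have $\bar S^{-1} = S^T$, so a unitary consimilarity transformation is simply $A \mapsto SAS^T$. A short numerical check of this identity (a numpy sketch of ours):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
# a random unitary matrix from a QR decomposition
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

assert np.allclose(Q.conj().T @ Q, np.eye(n))                       # Q is unitary
# for unitary S, conj(S)^{-1} = S^T, so unitary consimilarity is A -> S A S^T
assert np.allclose(Q @ A @ np.linalg.inv(np.conj(Q)), Q @ A @ Q.T)
```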
Example 1
Let a square matrix A define a semilinear operator 𝒜 : V → V and let the singular part of its regularizing decomposition be $J_2 \oplus J_3 \oplus J_4$. This means that V possesses a set of linearly independent vectors forming the Jordan chains
On the second step, we delete $e_2$, $f_2$, $g_2$, and so on. Thus, $r_i$ is the number of vectors in the ith column of (3): $r_1 = 3$, $r_2 = 3$, $r_3 = 2$, $r_4 = 1$. We get the singular part of the regularizing decomposition of A:
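For this example the $r_i$ can also be recovered from ranks: the singular summands have real entries, so conjugation plays no role, and $r_i = \operatorname{rank} A^{i-1} - \operatorname{rank} A^i$ counts the Jordan chains of length at least $i$. A numpy sketch of ours (an illustration of the example, not the unitary algorithm itself):

```python
import numpy as np

def jordan_block(n):
    """n x n nilpotent Jordan block J_n(0), here with 1's on the subdiagonal."""
    return np.diag(np.ones(n - 1), -1)

def direct_sum(*blocks):
    n = sum(b.shape[0] for b in blocks)
    A = np.zeros((n, n))
    i = 0
    for b in blocks:
        k = b.shape[0]
        A[i:i + k, i:i + k] = b
        i += k
    return A

A = direct_sum(jordan_block(2), jordan_block(3), jordan_block(4))  # the singular part
rank = np.linalg.matrix_rank
power = np.linalg.matrix_power
r = [int(rank(power(A, i - 1)) - rank(power(A, i))) for i in range(1, 5)]
print(r)  # [3, 3, 2, 1], matching r1 = 3, r2 = 3, r3 = 2, r4 = 1
```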
The canonical form of a matrix pair under mixed equivalence was obtained by Djoković [2] (his result was extended to undirected cycles of linear and semilinear mappings in [1]):
Each pair (A,B) of matrices of the same size is mixed equivalent to a direct sum, determined uniquely up to permutation of summands, of pairs of the following types:
in which $(\underline A,\underline B)$ is determined up to mixed equivalence and the other summands are uniquely determined up to permutation. This sum is called a regularizing decomposition of (A,B). The following algorithm makes it possible to construct a regularizing decomposition using only unitary transformations.
Algorithm 2
Let (A,B) be a pair of matrices of the same size in which the rows of A are linearly dependent. By unitary transformations of rows, we reduce A to the form
By unitary transformations of columns, we reduce B′ to the form $[B_1'\ \ 0]$, in which the columns of $B_1'$ are linearly independent, and obtain
in which the rows of $[{*}\ \ A_2]$ are linearly independent, $S_2$ and $R_2$ are unitary, and the columns of $B_2'$ are linearly independent.
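The row-reduction step can be sketched numerically via an SVD: the left singular vectors belonging to zero singular values span the left null space, and placing them first in a unitary $S$ moves the $k_1$ zero rows to the top. (A numpy illustration of ours under these assumptions, not the exact procedure of the algorithm.)

```python
import numpy as np

def row_reduce(A, tol=1e-12):
    """Return unitary S such that S @ A has k1 = m - rank(A) zero rows on top
    and the remaining rows linearly independent (an SVD-based sketch)."""
    U, s, Vh = np.linalg.svd(A)
    r = int(np.sum(s > tol))
    m = A.shape[0]
    order = list(range(r, m)) + list(range(r))   # null-space directions first
    S = U[:, order].conj().T
    return S, m - r

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))   # a 4 x 5 matrix of rank 3
S, k1 = row_reduce(A)
SA = S @ A
assert k1 == 1
assert np.allclose(SA[:k1], 0)                      # k1 zero rows on top
assert np.linalg.matrix_rank(SA[k1:]) == 4 - k1     # remaining rows independent
```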
(all exponents in parentheses are nonnegative). The pair $(A_t^T, B_t^T)$ consists of nonsingular matrices; it is determined up to mixed equivalence. The other summands are uniquely determined by (A,B).
The rows of $A_t$ in Theorem 2 are linearly independent, and so the columns of $A_t^T$ are linearly independent. As follows from Algorithm 2, the columns of $B_t$ are linearly independent too. Since the rows of $A_t$ and the columns of $B_t$ are linearly independent and the matrices in $(A_t,B_t)$ have the same size, these matrices are square, and so they are nonsingular. The pairs $(I_n,J_n^T)$ and $(G_n^T,F_n^T)$ are permutationally equivalent to $(I_n,J_n)$ and $(F_n^T,G_n^T)$. Therefore, the following lemma implies Theorem 2.
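These permutational equivalences are simultaneous reversals of row and column order; a small numpy check of ours ($F_n$, $G_n$ built from the displayed pattern, $J_n(0)$ taken with superdiagonal 1's):

```python
import numpy as np

def F(n): return np.vstack([np.zeros((1, n - 1)), np.eye(n - 1)])
def G(n): return np.vstack([np.eye(n - 1), np.zeros((1, n - 1))])

n = 4
J = np.diag(np.ones(n - 1), 1)          # nilpotent Jordan block J_n(0)
rev = lambda M: M[::-1, ::-1]           # reverse row and column order

assert np.array_equal(rev(J.T), J)              # (I_n, J_n^T) ~ (I_n, J_n)
assert np.array_equal(rev(G(n).T), F(n).T)      # (G_n^T, F_n^T) ~ (F_n^T, G_n^T)
assert np.array_equal(rev(F(n).T), G(n).T)
```

Reversal fixes $I_n$, so applying it to both matrices of a pair is a permutation equivalence of the pair.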
Lemma 1
Let (A,B) be a pair of complex matrices of the same size. Let us apply Algorithm 2 to (A,B) and obtain
(all exponents in parentheses are nonnegative). The rows of $A_t$ are linearly independent. The pair $(A_t,B_t)$ is determined up to mixed equivalence. The other summands are uniquely determined by (A,B).
Proof. We write
$(A,B) \Rightarrow (k_1, l_1, (A_1,B_1))$
if $k_1$, $l_1$, $(A_1,B_1)$ are obtained from (A,B) in the first step of Algorithm 2.
Statement 1. If $(A,B) \Rightarrow (k_1, l_1, (A_1,B_1))$, $(\tilde A,\tilde B) \Rightarrow (\tilde k_1, \tilde l_1, (\tilde A_1,\tilde B_1))$, and (A,B) is mixed equivalent to $(\tilde A,\tilde B)$, then $k_1 = \tilde k_1$, $l_1 = \tilde l_1$, and $(A_1,B_1)$ is mixed equivalent to $(\tilde A_1,\tilde B_1)$.
Let m be the number of rows in A. Then
$$k_1 = m - \operatorname{rank} A = m - \operatorname{rank} \tilde A = \tilde k_1.$$
Since (A,B) and $(\tilde A,\tilde B)$ are mixed equivalent and they are reduced by mixed equivalence transformations to
Since $B_1'$ and $\tilde B_1'$ are $k_1 \times l_1$ and have linearly independent columns, (9) implies that R is of the form
hence $(A_1,B_1)$ and $(\tilde A_1,\tilde B_1)$ are mixed equivalent, which completes the proof of Statement 1.
Indeed, if (A,B) and $(\tilde A,\tilde B)$ are reduced to (7), then $(A,B) \oplus (\tilde A,\tilde B)$ is reduced to
We find that $k_2 - l_2$ is the number of summands of the type $(F_2,G_2)$, and that $l_1 - k_2$ is the number of summands of the type $(J_1,I_1)$, and so on, until we obtain (5).
The fact that the pair $(A_t,B_t)$ in (5) is determined up to mixed equivalence and the other summands are uniquely determined by (A,B) follows from Statement 1 (or from the canonical form of a matrix pair up to mixed equivalence). This concludes the proof of Lemma 1 and Theorem 2.