This work is licensed under the Creative Commons Attribution 4.0 International License.
Introduction
The modern concept of the matrix took shape gradually in the 19th century. In 1801, the German mathematician Gauss collected the coefficients of a linear transformation into a single array. In 1844, the German mathematician Eisenstein discussed ‘transformations’ (matrices) and their products. In 1850, the British mathematician Sylvester first used the term matrix. In 1858, the British mathematician Cayley published ‘A Memoir on the Theory of Matrices’. He was the first to study matrices as independent mathematical objects and the first to publish a series of articles on the subject, and he is therefore regarded as the founder of matrix theory. He gave a series of definitions that are still in common use, such as equality of two matrices, the zero matrix, the sum of two matrices, the product of a number and a matrix, the product of two matrices, the matrix inverse and the transposed matrix [1]. Cayley also noted that matrix multiplication is associative but in general not commutative, and that an m ∗ n matrix can only be multiplied by an n ∗ k matrix. In 1854, the French mathematician Hermite used the term ‘orthogonal matrix’, but its formal definition was not published until 1878, by the German mathematician Frobenius. In 1879, Frobenius introduced the concept of matrix rank. The matrix is an important basic concept in mathematics: it is a principal object of study in algebra and an important tool in applied mathematics.
Properties of matrix rank
The maximum order r of a nonzero minor of a matrix A is called the rank of the matrix, denoted rank (A) = r. If a matrix has no nonzero minors at all, i.e. it is the zero matrix, its rank is zero.
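As a numerical illustration of this definition (a sketch added here, not part of the original argument; `rank_by_minors` is a hypothetical helper name, and NumPy is assumed to be available), the largest order of a nonzero minor can be found by brute force and compared against NumPy's SVD-based `matrix_rank`:

```python
# Brute-force check of the minor-based definition of rank.
from itertools import combinations

import numpy as np

def rank_by_minors(A, tol=1e-9):
    """Largest order r such that A has an r-order minor that is not zero."""
    m, n = A.shape
    for r in range(min(m, n), 0, -1):
        for rows in combinations(range(m), r):
            for cols in combinations(range(n), r):
                if abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol:
                    return r
    return 0  # the zero matrix has no nonzero minors, so rank 0

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # twice the first row, so rank < 3
              [1.0, 0.0, 1.0]])
print(rank_by_minors(A))         # 2
print(np.linalg.matrix_rank(A))  # 2, agreeing with the minor definition
```

The brute-force search is exponential in the matrix size and is meant only to mirror the definition; in practice the rank is computed by elimination or by SVD.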
Invariance of the rank of a matrix
The ranks of the transposed matrices are equal, that is, rank (AT) = rank (A). The elementary transformation does not change the rank of the matrix.
Proof 1
Suppose that rows i and j of a matrix A = (aij)m×n are exchanged to obtain the matrix B:
A = \left({\matrix{{{a_{11}}} & \cdots & {{a_{1n}}} \cr \vdots & {} & \vdots \cr {{a_{i1}}} & \cdots & {{a_{in}}} \cr \vdots & {} & \vdots \cr {{a_{j1}}} & \cdots & {{a_{jn}}} \cr \vdots & {} & \vdots \cr {{a_{m1}}} & \cdots & {{a_{mn}}} \cr}} \right),\;B = \left({\matrix{{{a_{11}}} & \cdots & {{a_{1n}}} \cr \vdots & {} & \vdots \cr {{a_{j1}}} & \cdots & {{a_{jn}}} \cr \vdots & {} & \vdots \cr {{a_{i1}}} & \cdots & {{a_{in}}} \cr \vdots & {} & \vdots \cr {{a_{m1}}} & \cdots & {{a_{mn}}} \cr}} \right)
And let the rank of matrix A be r; we show that the rank of matrix B is also r. Let D be an s-order minor of B with s > r. If D does not contain elements of rows i and j at the same time, then D is, up to the order of its rows, an s-order minor of A, and hence D = 0; if D contains elements of both rows i and j, then:
D = \left| {\matrix{{{a_{l{t_1}}}} & \cdots & {{a_{l{t_s}}}} \cr \vdots & {} & \vdots \cr {{a_{j{t_1}}}} & \cdots & {{a_{j{t_s}}}} \cr \vdots & {} & \vdots \cr {{a_{i{t_1}}}} & \cdots & {{a_{i{t_s}}}} \cr \vdots & {} & \vdots \cr {{a_{k{t_1}}}} & \cdots & {{a_{k{t_s}}}} \cr}} \right| = - \left| {\matrix{{{a_{l{t_1}}}} & \cdots & {{a_{l{t_s}}}} \cr \vdots & {} & \vdots \cr {{a_{i{t_1}}}} & \cdots & {{a_{i{t_s}}}} \cr \vdots & {} & \vdots \cr {{a_{j{t_1}}}} & \cdots & {{a_{j{t_s}}}} \cr \vdots & {} & \vdots \cr {{a_{k{t_1}}}} & \cdots & {{a_{k{t_s}}}} \cr}} \right| = 0
This shows that rank (A) ≥ rank (B). Exchanging rows i and j of matrix B recovers matrix A, so by the same argument rank (A) ≤ rank (B). Hence rank (A) = rank (B), which proves that the first elementary transformation does not change the rank of the matrix.
Now suppose that row i of matrix A is multiplied by a nonzero number k to obtain matrix B. Let D be an s-order minor of B with s > r. If D does not contain elements of row i, then D is an s-order minor of A, so D = 0; if D contains elements of row i, then the following is obtained:
D = \left| {\matrix{{{a_{l{t_1}}}} & \cdots & {{a_{l{t_s}}}} \cr \vdots & {} & \vdots \cr {k{a_{i{t_1}}}} & \cdots & {k{a_{i{t_s}}}} \cr \vdots & {} & \vdots \cr {{a_{k{t_1}}}} & \cdots & {{a_{k{t_s}}}} \cr}} \right| = k\left| {\matrix{{{a_{l{t_1}}}} & \cdots & {{a_{l{t_s}}}} \cr \vdots & {} & \vdots \cr {{a_{i{t_1}}}} & \cdots & {{a_{i{t_s}}}} \cr \vdots & {} & \vdots \cr {{a_{k{t_1}}}} & \cdots & {{a_{k{t_s}}}} \cr}} \right| = 0
So, rank (A) ≥ rank (B). Conversely, multiplying row i of matrix B by
{1 \over k}
recovers matrix A, so rank (A) ≤ rank (B). Hence rank (A) = rank (B), which proves that the second elementary transformation does not change the rank of the matrix.
Finally, multiply row j of a matrix A by the number k and add it to row i to obtain the matrix B:
A = \left({\matrix{{{a_{11}}} & \cdots & {{a_{1n}}} \cr \vdots & {} & \vdots \cr {{a_{i1}}} & \cdots & {{a_{in}}} \cr \vdots & {} & \vdots \cr {{a_{j1}}} & \cdots & {{a_{jn}}} \cr \vdots & {} & \vdots \cr {{a_{m1}}} & \cdots & {{a_{mn}}} \cr}} \right),\;B = \left({\matrix{{{a_{11}}} & \cdots & {{a_{1n}}} \cr \vdots & {} & \vdots \cr {{a_{i1}} + k{a_{j1}}} & \cdots & {{a_{in}} + k{a_{jn}}} \cr \vdots & {} & \vdots \cr {{a_{j1}}} & \cdots & {{a_{jn}}} \cr \vdots & {} & \vdots \cr {{a_{m1}}} & \cdots & {{a_{mn}}} \cr}} \right)
And let the rank of A be r; we must prove that the rank of B is also r. We first prove that the rank of B cannot exceed r. If B has no minors of order greater than r, then of course it has no nonzero minors of order greater than r, and its rank cannot exceed r. So let D be an s-order minor of B with s > r. There are three possible cases.
If D does not contain elements of row i, then D is also a minor of A; since the rank of A is r and s > r, D = 0.
If D contains elements of row i and elements of row j, then:
D = \left| {\matrix{{{a_{h{t_1}}}} & \cdots & {{a_{h{t_s}}}} \cr \vdots & {} & \vdots \cr {{a_{i{t_1}}} + k{a_{j{t_1}}}} & \cdots & {{a_{i{t_s}}} + k{a_{j{t_s}}}} \cr \vdots & {} & \vdots \cr {{a_{j{t_1}}}} & \cdots & {{a_{j{t_s}}}} \cr \vdots & {} & \vdots \cr {{a_{l{t_1}}}} & \cdots & {{a_{l{t_s}}}} \cr}} \right| = \left| {\matrix{{{a_{h{t_1}}}} & \cdots & {{a_{h{t_s}}}} \cr \vdots & {} & \vdots \cr {{a_{i{t_1}}}} & \cdots & {{a_{i{t_s}}}} \cr \vdots & {} & \vdots \cr {{a_{j{t_1}}}} & \cdots & {{a_{j{t_s}}}} \cr \vdots & {} & \vdots \cr {{a_{l{t_1}}}} & \cdots & {{a_{l{t_s}}}} \cr}} \right| = 0
If D contains elements of row i but not of row j, then:
D = \left| {\matrix{{{a_{h{t_1}}}} & \cdots & {{a_{h{t_s}}}} \cr \vdots & {} & \vdots \cr {{a_{i{t_1}}} + k{a_{j{t_1}}}} & \cdots & {{a_{i{t_s}}} + k{a_{j{t_s}}}} \cr \vdots & {} & \vdots \cr {{a_{l{t_1}}}} & \cdots & {{a_{l{t_s}}}} \cr}} \right| = \left| {\matrix{{{a_{h{t_1}}}} & \cdots & {{a_{h{t_s}}}} \cr \vdots & {} & \vdots \cr {{a_{i{t_1}}}} & \cdots & {{a_{i{t_s}}}} \cr \vdots & {} & \vdots \cr {{a_{l{t_1}}}} & \cdots & {{a_{l{t_s}}}} \cr}} \right| + k\left| {\matrix{{{a_{h{t_1}}}} & \cdots & {{a_{h{t_s}}}} \cr \vdots & {} & \vdots \cr {{a_{j{t_1}}}} & \cdots & {{a_{j{t_s}}}} \cr \vdots & {} & \vdots \cr {{a_{l{t_1}}}} & \cdots & {{a_{l{t_s}}}} \cr}} \right| = {D_1} + k{D_2}
Since D1 and D2 are, up to the order of their rows, s-order minors of the matrix A, D1 = 0 and D2 = 0. Thus, D = 0.
From the above three cases, all minors of matrix B of order greater than r are zero; therefore, the rank of matrix B is not greater than r, i.e. rank (A) ≥ rank (B). Similarly, we can perform the inverse elementary transformation on matrix B to obtain matrix A, which gives rank (A) ≤ rank (B). Hence rank (A) = rank (B): the third elementary transformation does not change the rank of the matrix. Together, the three parts above prove that elementary transformations do not change the rank of a matrix. This completes the proof [2, 3].
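The three invariance statements just proved can be spot-checked numerically. This is a sketch assuming NumPy is available; it checks, not proves, the result:

```python
# Numerical spot-check: the three elementary row transformations
# leave the rank unchanged.
import numpy as np

A = np.array([[1., 2., 0., 1., 3.],
              [0., 1., 1., 0., 2.],
              [1., 3., 1., 1., 5.],   # row 1 + row 2, so A is rank-deficient
              [2., 0., 1., 1., 0.]])
r = np.linalg.matrix_rank(A)

B1 = A.copy(); B1[[0, 2]] = B1[[2, 0]]   # 1) exchange rows i and j
B2 = A.copy(); B2[1] *= 5.0              # 2) multiply a row by k != 0
B3 = A.copy(); B3[0] += 2.0 * B3[3]      # 3) add k times row j to row i

assert all(np.linalg.matrix_rank(B) == r for B in (B1, B2, B3))
print(r)
```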
In fact, performing an elementary row or column transformation is equivalent to multiplying the matrix on the left or right by an invertible matrix. Let A be an m × n matrix:
A = \left({\matrix{{{a_{11}}} & {{a_{12}}} & \cdots & {{a_{1n}}} \cr {{a_{21}}} & {{a_{22}}} & \cdots & {{a_{2n}}} \cr \vdots & \vdots & {} & \vdots \cr {{a_{m1}}} & {{a_{m2}}} & \cdots & {{a_{mn}}} \cr}} \right)
By elementary row transformations together with column exchanges, A can be transformed into the staircase form:
J = \left({\matrix{1 & * & * & \cdots & * & * & \cdots & * \cr 0 & 1 & * & \cdots & * & * & \cdots & * \cr \vdots & \vdots & \vdots & {} & \vdots & \vdots & {} & \vdots \cr 0 & 0 & 0 & \cdots & 1 & * & \cdots & * \cr 0 & 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \cr \vdots & \vdots & \vdots & {} & \vdots & \vdots & {} & \vdots \cr 0 & 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \cr}} \right)
And further becomes:
\left({\matrix{1 & 0 & 0 & \cdots & 0 & {{c_{1,r + 1}}} & \cdots & {{c_{1n}}} \cr 0 & 1 & 0 & \cdots & 0 & {{c_{2,r + 1}}} & \cdots & {{c_{2n}}} \cr \vdots & \vdots & \vdots & {} & \vdots & \vdots & {} & \vdots \cr 0 & 0 & 0 & \cdots & 1 & {{c_{r,r + 1}}} & \cdots & {{c_{rn}}} \cr 0 & 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \cr \vdots & \vdots & \vdots & {} & \vdots & \vdots & {} & \vdots \cr 0 & 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \cr}} \right) = \left({\matrix{{{I_r}} & {{C_{r \times \left({n - r} \right)}}} \cr 0 & 0 \cr}} \right)
If all elements aij of matrix A are equal to zero, then A already has the form of J. So suppose some aij is not equal to zero. By exchanging rows and columns if necessary, this element can be brought to the top-left corner of the matrix. Multiply the first row by
{1 \over {{a_{ij}}}}
and then subtract appropriate multiples of the first row from the remaining rows. Matrix A becomes:
B = \left({\matrix{1 & * & \cdots & * \cr 0 & * & \cdots & * \cr \vdots & \vdots & {} & \vdots \cr 0 & * & \cdots & * \cr}} \right)
If, apart from the first row, all other rows of B are zero, then B has the form of J. Otherwise, suppose there is a nonzero element b in the last m − 1 rows of B. Bring b to the position of the second row and second column and then use the same method as above to convert B to:
\left({\matrix{1 & * & * & \cdots & * \cr 0 & 1 & * & \cdots & * \cr 0 & 0 & * & \cdots & * \cr \vdots & \vdots & \vdots & {} & \vdots \cr 0 & 0 & * & \cdots & * \cr}} \right)
. Continuing in this way, we finally obtain a matrix of the form J [4, 5].
To reach the fully reduced form, we only need to subtract appropriate multiples of row r from rows 1, 2, …, r − 1, then appropriate multiples of row r − 1 from rows 1, 2, …, r − 2, and so on, obtaining a matrix of the form
\left({\matrix{{{I_r}} & {{C_{r,n - r}}} \cr 0 & 0 \cr}} \right)
.
In fact, any matrix can be transformed into staircase form by elementary transformations, and the number of non-zero rows in the staircase matrix is the rank of the matrix. This yields another, equivalent definition of matrix rank:
The number of non-zero rows in the staircase form obtained from the matrix A = (aij)m×n by elementary transformations is the rank of the matrix. If the rank of matrix A is r, this is denoted by R(A) = r. In particular, the rank of the zero matrix is R(O) = 0.
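This echelon-form definition translates directly into code. The sketch below (`row_echelon_rank` is a hypothetical helper; exact rational arithmetic via the standard-library `fractions` module avoids any numerical tolerance) counts the nonzero rows produced by Gaussian elimination:

```python
from fractions import Fraction

def row_echelon_rank(rows):
    """Gaussian elimination with exact arithmetic; the rank is the
    number of nonzero rows of the resulting staircase form."""
    M = [[Fraction(x) for x in row] for row in rows]
    m, n = len(M), len(M[0])
    pivot_row = 0
    for col in range(n):
        # find a pivot in this column at or below pivot_row
        pr = next((r for r in range(pivot_row, m) if M[r][col] != 0), None)
        if pr is None:
            continue
        M[pivot_row], M[pr] = M[pr], M[pivot_row]       # row exchange
        for r in range(pivot_row + 1, m):               # eliminate below
            f = M[r][col] / M[pivot_row][col]
            M[r] = [a - f * b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
    return pivot_row  # = number of nonzero rows

print(row_echelon_rank([[1, 2, 3], [2, 4, 6], [1, 0, 1]]))  # 2
```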
Elementary transformations can be used to transform the matrix A into the equivalent standard form
I = \left({\matrix{{{E_r}} & 0 \cr 0 & 0 \cr}} \right)
. Since the elementary transformation does not change the rank of the matrix, rank (A) = r. From this, the following theorem is obtained:
Any matrix A can be reduced to the form
I = \left({\matrix{{{E_r}} & 0 \cr 0 & 0 \cr}} \right)
, where I is called the equivalent standard form of A, and rank (A) = r. Equivalent matrices have the same rank, and matrices of the same size with the same rank are equivalent: A ∼ B ⇔ rank (A) = rank (B).
Proof
If A ∼ B, there are invertible matrices P and Q such that B = PAQ. Because multiplying by an invertible matrix leaves the rank unchanged (there is a proof below), rank (A) = rank (PA) = rank (PAQ) = rank (B).
Conversely, if rank (A) = rank (B) and A, B have the same size, let IA be the equivalent standard form of A, so A ∼ IA, and let IB be the equivalent standard form of B, so B ∼ IB. Since rank (A) = rank (B), IA = IB, and by the transitivity of equivalence, A ∼ B. The proof is complete. If the Jordan canonical form of both matrices A and B is J, then rank (A) = rank (B).
Proof
Set
J = \left({\matrix{{{J_1}} & {} & {} \cr {} & \ddots & {} \cr {} & {} & {{J_{_S}}} \cr}} \right)
,
{J_i} = {\left({\matrix{{{\lambda _i}} & 1 & {} & {} \cr {} & {{\lambda _i}} & \ddots & {} \cr {} & {} & \ddots & 1 \cr {} & {} & {} & {{\lambda _i}} \cr}} \right)_{{n_i} \times {n_i}}}
, (i = 1, 2, …, s). Then the elementary divisors of the matrix A and the matrix B are both (λ − λ1)n1,(λ − λ2)n2,⋯(λ − λs)ns, so A and B are similar; similar matrices are in particular equivalent, so by the theorem above rank (A) = rank (B). From this we can get the following:
If the rank of matrix A is r, the rank of its Jordan canonical form is also r. That is, passing to a canonical form does not change the rank of the matrix.
Congruent matrices have the same rank. That is, if A ≅ B, then rank (A) = rank (B).
Proof
If A ≅ B, then there is an invertible matrix C such that CTAC = B. By property 1.1.1, rank (CT) = rank (C), so CT is also invertible. From 1.3.4:
rank\left({{C^T}AC} \right) = rank\left({{C^T}A} \right) = rank\left(A \right)
If the block matrix A is transformed into a matrix B by a finite number of elementary block-matrix transformations, the rank of the matrix is unchanged.
Some basic properties of the rank of a matrix
0 ≤ rank (Am×n) ≤ min{m,n}. rank (A) = rank (AT). If matrix B is obtained by deleting some rows (columns) of matrix A, then rank (A) ≥ rank (B). Let A be a square matrix of order n (n ≥ 2) and A* its adjugate; then
rank\left({{A^*}} \right) = \left\{{\matrix{n \hfill & {rank\left(A \right) = n} \hfill \cr 1 \hfill & {rank\left(A \right) = n - 1} \hfill \cr 0 \hfill & {rank\left(A \right) < n - 1} \hfill \cr}} \right.
.
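The three cases for the rank of the adjugate A* can be verified on small examples. This is a sketch assuming NumPy; `adjugate` is an illustrative cofactor-based helper, not a library function:

```python
import numpy as np

def adjugate(A):
    """Adjugate via cofactors: adj(A) is the transpose of the cofactor matrix."""
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

rank = np.linalg.matrix_rank
A_full = np.array([[2., 0., 0.], [0., 3., 0.], [0., 0., 5.]])  # rank n = 3
A_n1   = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 0.]])  # rank n - 1
A_low  = np.array([[1., 0., 0.], [0., 0., 0.], [0., 0., 0.]])  # rank < n - 1
print(rank(adjugate(A_full)))  # 3: A invertible, so A* is too
print(rank(adjugate(A_n1)))    # 1: exactly one nonzero cofactor survives
print(rank(adjugate(A_low)))   # 0: every (n-1)-order minor vanishes
```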
Matrix rank and matrix operations
rank\left({kA} \right) = \left\{{\matrix{{rank\left(A \right),k \ne 0} \hfill \cr {0,k = 0} \hfill \cr}} \right..\,\,rank\left({\matrix{A & 0 \cr 0 & B \cr}} \right) = rank\left(A \right) + rank\left(B \right).\,\,rank\left({\matrix{A & 0 \cr C & B \cr}} \right) \ge rank\left(A \right) + rank\left(B \right)
. rank (AB) ≤ min{rank (A), rank (B)}. In particular, if A is invertible, rank (AB) = rank (B).
Proof
Let A be an m × n matrix, B an n × p matrix and rank (A) = r, and let the equivalent standard form of A be
{I_A} = \left({\matrix{{{E_r}} & {{0_{r,n - r}}} \cr {{0_{m - r,r}}} & {{0_{m - r,n - r}}} \cr}} \right)
In other words, there are elementary matrices E1,E2,⋯, Ep of order m and elementary matrices Ep+1,Ep+2,⋯, Eq of order n such that
{E_1} \cdots {E_p}A{E_{p + 1}} \cdots {E_q} = {I_A}
So, have:
{E_1} \cdots {E_p}AB = {E_1} \cdots {E_p}A{E_{p + 1}} \cdots {E_q}E_q^{- 1} \cdots E_{p + 1}^{- 1}B = {I_A}E_q^{- 1} \cdots E_{p + 1}^{- 1}B = {I_A}{B_1}
Here,
{B_1} = E_q^{- 1} \cdots E_{p + 1}^{- 1}B
. Obviously, in IAB1 all rows except (at most) the first r are zero, and so rank (IAB1) ≤ r. Since E1⋯EpAB is obtained from AB by elementary transformations, the two have the same rank, which proves that rank (AB) ≤ rank (A). Similarly, it can be proved that rank (AB) ≤ rank (B). If one of A, B is an invertible matrix, say A, then on the one hand rank (AB) ≤ rank (B) by the above, and on the other hand B = A−1 (AB) gives rank (B) ≤ rank (AB). Therefore, rank (AB) = rank (B). This completes the proof.
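A small numerical check of rank (AB) ≤ min{rank (A), rank (B)}, and of the equality when the left factor is invertible (a sketch assuming NumPy):

```python
import numpy as np

rank = np.linalg.matrix_rank

A = np.array([[1., 2.], [2., 4.], [0., 1.]])   # 3 x 2, rank 2
B = np.array([[1., 1., 0.], [1., 1., 0.]])     # 2 x 3, rank 1
assert rank(A @ B) <= min(rank(A), rank(B))
print(rank(A @ B))                              # 1 = min(2, 1)

P = np.array([[2., 1.], [1., 1.]])              # invertible (det = 1)
assert rank(P @ B) == rank(B)                   # invertible factor preserves rank
```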
This property extends to the product of any number of matrices: the rank of a product is not greater than the rank of any factor. If matrices A and B have the same size, then rank (A ± B) ≤ rank (A) + rank (B).
Proof
First we prove rank (A + B) ≤ rank (A) + rank (B). Since
\left({\matrix{A & B \cr 0 & B \cr}} \right)\left({\matrix{{{E_n}} & 0 \cr {{E_n}} & 0 \cr}} \right) = \left({\matrix{{A + B} & 0 \cr B & 0 \cr}} \right)
And so:
rank\left({A + B} \right) \le rank\left({\matrix{{A + B} & 0 \cr B & 0 \cr}} \right) = rank\left({\left({\matrix{A & B \cr 0 & B \cr}} \right)\left({\matrix{{{E_n}} & 0 \cr {{E_n}} & 0 \cr}} \right)} \right) \le rank\left({\matrix{A & B \cr 0 & B \cr}} \right) \le rank\left(A \right) + rank\left(B \right).
Also, rank (A) = rank (A − B + B) ≤ rank (A − B) + rank (B), i.e. rank (A − B) ≥ rank (A) − rank (B). Hence rank (A ± B) ≤ rank (A) + rank (B).
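Both bounds derived above, rank (A + B) ≤ rank (A) + rank (B) and rank (A − B) ≥ rank (A) − rank (B), can be checked on a simple pair of matrices (a sketch assuming NumPy):

```python
import numpy as np

rank = np.linalg.matrix_rank

A = np.array([[1., 0.], [0., 0.]])   # rank 1
B = np.array([[0., 0.], [0., 1.]])   # rank 1
assert rank(A + B) <= rank(A) + rank(B)   # here 2 <= 1 + 1, with equality
assert rank(A - B) >= rank(A) - rank(B)   # here 2 >= 0
print(rank(A + B), rank(A - B))
```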
Some inequality equations about the ranks of matrices and their applications
(Sylvester inequality) Let A be an s × n matrix and B an n × m matrix; then rank (AB) ≥ rank (A) + rank (B) − n.
Proof
(Proof using block matrices.) Since
\left({\matrix{{AB} & 0 \cr 0 & {{E_n}} \cr}} \right)\buildrel {{r_1} + A{r_2}} \over \longrightarrow \left({\matrix{{AB} & A \cr 0 & {{E_n}} \cr}} \right)\buildrel {{c_1} - {c_2}B} \over \longrightarrow \left({\matrix{0 & A \cr {- B} & {{E_n}} \cr}} \right),
And so:
rank\left({\matrix{{AB} & 0 \cr 0 & {{E_n}} \cr}} \right) = rank\left({\matrix{0 & A \cr {- B} & {{E_n}} \cr}} \right) \ge rank\left(A \right) + rank\left(B \right)
, that is, rank (AB) + n ≥ rank (A) + rank (B), so rank (AB) ≥ rank (A) + rank (B) − n. This completes the proof [6, 7].
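The Sylvester bound rank (AB) ≥ rank (A) + rank (B) − n can be exercised on random integer matrices. This is a randomized spot-check assuming NumPy, not a proof:

```python
import numpy as np

rank = np.linalg.matrix_rank

rng = np.random.default_rng(1)
n = 4
for _ in range(100):
    A = rng.integers(-2, 3, size=(3, n)).astype(float)  # s x n
    B = rng.integers(-2, 3, size=(n, 5)).astype(float)  # n x m
    assert rank(A @ B) >= rank(A) + rank(B) - n
print("Sylvester bound held in all trials")
```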
If the matrices A and B are n × n matrices and AB = 0, then rank (A) + rank (B) ≤ n. (Frobenius inequality) Let A, B and C be m × n, n × s and s × t matrices, respectively; then rank (ABC) ≥ rank (AB) + rank (BC) − rank (B).
Proof: since
\left({\matrix{{AB} & {ABC} \cr B & 0 \cr}} \right)\left({\matrix{{{I_s}} & C \cr 0 & {- {I_t}} \cr}} \right) = \left({\matrix{{AB} & 0 \cr B & {BC} \cr}} \right)
, by property 1.1.3 we have:
rank\left({AB} \right) + rank\left({BC} \right) \le rank\left({\matrix{{AB} & 0 \cr B & {BC} \cr}} \right) = rank\left({\matrix{{AB} & {ABC} \cr B & 0 \cr}} \right) = rank\left({\matrix{0 & {ABC} \cr B & 0 \cr}} \right) = rank\left({ABC} \right) + rank\left(B \right)
We obtain rank (ABC) ≥ rank (AB) + rank (BC) − rank (B). This completes the proof. Let A and B be matrices of order n; then, since AB − In = A(B − In) + (A − In), we have rank (AB − In) ≤ rank (A − In) + rank (B − In).
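The Frobenius bound rank (ABC) ≥ rank (AB) + rank (BC) − rank (B) admits the same kind of randomized spot-check (a sketch assuming NumPy, not a proof):

```python
import numpy as np

rank = np.linalg.matrix_rank

rng = np.random.default_rng(2)
for _ in range(100):
    A = rng.integers(-2, 3, size=(3, 4)).astype(float)  # m x n
    B = rng.integers(-2, 3, size=(4, 2)).astype(float)  # n x s
    C = rng.integers(-2, 3, size=(2, 5)).astype(float)  # s x t
    assert rank(A @ B @ C) >= rank(A @ B) + rank(B @ C) - rank(B)
print("Frobenius bound held in all trials")
```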
Since (f (x),h(x)) = 1, we have (f (x)g(x),g(x)h(x)) = g(x) and m(x) = f (x)g(x)h(x). From Theorem 1.4.3, we get:
rank\left({f\left(A \right)g\left(A \right)} \right) + rank\left({g\left(A \right)h\left(A \right)} \right) = rank\left({f\left(A \right)g\left(A \right)h\left(A \right)} \right) + rank\left({g\left(A \right)} \right).
Certificate completed.
Assume A ∈ Fn×n, fi (x), gj (x) ∈ P[x], and (fi (x),gj (x)) = 1, 1 ≤ i ≤ m, 1 ≤ j ≤ t, then
rank\left({\prod\limits_{i = 1}^m {f_i}\left(A \right)} \right) + rank\left({\prod\limits_{j = 1}^t {g_j}\left(A \right)} \right) = n + rank\left({\prod\limits_{i = 1}^m {f_i}\left(A \right)\prod\limits_{j = 1}^t {g_j}\left(A \right)} \right).
Proof
Since (fi (x),gj (x)) = 1 for all i, j, we have d(A) = E and
m\left(A \right) = \prod\limits_{i = 1}^m {f_i}\left(A \right)\prod\limits_{j = 1}^t {g_j}\left(A \right)
. From Theorem 1.4.3, we get:
rank\left({\prod\limits_{i = 1}^m {f_i}\left(A \right)} \right) + rank\left({\prod\limits_{j = 1}^t {g_j}\left(A \right)} \right) = n + rank\left({\prod\limits_{i = 1}^m {f_i}\left(A \right)\prod\limits_{j = 1}^t {g_j}\left(A \right)} \right).
Analysis of typical examples:
Example 1
Let A be a matrix of order n with A2 = A. Prove that rank (A) + rank (A − E) = n, where E is the identity matrix of order n.
Proof
Let f (x) = x and g(x) = x − 1; then (f (x),g(x)) = 1, and since A2 = A, we have A2 − A = 0. Applying the theorem, we get:
\matrix{{rank\left(A \right) + rank\left({A - E} \right) = rank\left({f\left(A \right)} \right) + rank\left({g\left(A \right)} \right)} \hfill \cr {= n + rank\left({f\left(A \right)g\left(A \right)} \right) = n + rank\left({A\left({A - E} \right)} \right) = n + rank\left({{A^2} - A} \right) = n + 0 = n.} \hfill \cr}
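Example 1 can be confirmed on a concrete idempotent matrix (a sketch assuming NumPy; P below is one projection matrix chosen for illustration):

```python
import numpy as np

rank = np.linalg.matrix_rank

# An idempotent (projection) matrix: P @ P == P, here with n = 3.
P = np.array([[1., 1., 0.],
              [0., 0., 0.],
              [0., 0., 0.]])
E = np.eye(3)
assert np.allclose(P @ P, P)           # P is indeed idempotent
print(rank(P) + rank(P - E))           # 3 = n, as Example 1 predicts
```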
Example 2
Let A be a matrix of order n with A2 = E; prove that rank (A + E) + rank (A − E) = n.
Proof
Let f (x) = x + 1 and g(x) = x − 1; then (f (x),g(x)) = 1. Applying the theorem, we get:
rank\left({A + E} \right) + rank\left({A - E} \right) = n + rank\left({\left({A + E} \right)\left({A - E} \right)} \right) = n + rank\left({{A^2} - E} \right) = n + rank\left(0 \right) = n + 0 = n.
Example 3
Let A ∈ Fn×n and let m be a positive integer. Then for any positive integers l, k:
rank (Al) + rank ((Am − E)k) = n, in case Am+1 = A;
rank ((A − E)l) + rank ((Am−1 + Am−2 + ⋯ + A + E)k) = n, in case Am = E.
Proof
Let f (x) = xl and g(x) = (xm − 1)k; then (f (x),g(x)) = 1. Applying the theorem,
\matrix{{rank\left({{A^l}} \right) + rank{{\left({{A^m} - E} \right)}^k} = rank\left({f\left(A \right)} \right) + rank\left({g\left(A \right)} \right)} \hfill \cr {= n + rank\left({f\left(A \right)g\left(A \right)} \right) = n + rank\left({{A^l}{{\left({{A^m} - E} \right)}^k}} \right) = n + rank\left({{A^{l - 1}}\left({{A^{m + 1}} - A} \right){{\left({{A^m} - E} \right)}^{k - 1}}} \right)} \hfill \cr {= n + rank\left({{A^{l - 1}} \cdot 0 \cdot {{\left({{A^m} - E} \right)}^{k - 1}}} \right) = n + rank\left(0 \right) = n} \hfill \cr}
The above examples are cumbersome to solve directly with annihilating polynomials, but become simple and transparent using the rank identities above. Rank inequalities have good applications in problem solving, which this article will not enumerate one by one.
The rank and invertibility of matrices
For any square matrix A of order n, the following three statements are equivalent: (1) matrix A is invertible; (2) rank (A) = n; (3) det A ≠ 0. The row rank, column rank and rank of a matrix are equal. Let A be an m × n matrix, P an invertible matrix of order m and Q an invertible matrix of order n; then rank (PAQ) = rank (AQ) = rank (PA) = rank (A).
Application of matrix rank in linear algebra
Consider the system of linear equations AX = B in n unknowns, where
A = \left({\matrix{{{a_{11}}} & {{a_{12}}} & \cdots & {{a_{1n}}} \cr {{a_{21}}} & {{a_{22}}} & \cdots & {{a_{2n}}} \cr \vdots & \vdots & {} & \vdots \cr {{a_{m1}}} & {{a_{m2}}} & \cdots & {{a_{mn}}} \cr}} \right),\;B = \left({\matrix{{{b_1}} \cr {{b_2}} \cr \vdots \cr {{b_m}} \cr}} \right)
and let its augmented matrix be
\overline A = \left({\matrix{{{a_{11}}} & {{a_{12}}} & \cdots & {{a_{1n}}} & {{b_1}} \cr {{a_{21}}} & {{a_{22}}} & \cdots & {{a_{2n}}} & {{b_2}} \cr \vdots & \vdots & {} & \vdots & \vdots \cr {{a_{m1}}} & {{a_{m2}}} & \cdots & {{a_{mn}}} & {{b_m}} \cr}} \right)
Then, there are the following criteria: (1) The system of equations AX = B has no solution if and only if rank (A) < rank (Ā); (2) The system of equations AX = B has a unique solution if and only if rank (A) = rank (Ā) = n; (3) The system of equations AX = B has infinitely many solutions if and only if rank (A) = rank (Ā) < n.
Proof
Using the elementary transformations indicated in Lemma 1.1.1 above, transform A and Ā into:
B = \left({\matrix{1 & 0 & \cdots & 0 & {{c_{1,r + 1}}} & \cdots & {{c_{1n}}} \cr 0 & 1 & \cdots & 0 & {{c_{2,r + 1}}} & \cdots & {{c_{2n}}} \cr \vdots & \vdots & {} & \vdots & \vdots & {} & \vdots \cr 0 & 0 & \cdots & 1 & {{c_{r,r + 1}}} & \cdots & {{c_{rn}}} \cr 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \cr \vdots & \vdots & {} & \vdots & \vdots & {} & \vdots \cr 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \cr}} \right),\;\overline B = \left({\matrix{1 & 0 & \cdots & 0 & {{c_{1,r + 1}}} & \cdots & {{c_{1n}}} & {{d_1}} \cr 0 & 1 & \cdots & 0 & {{c_{2,r + 1}}} & \cdots & {{c_{2n}}} & {{d_2}} \cr \vdots & \vdots & {} & \vdots & \vdots & {} & \vdots & \vdots \cr 0 & 0 & \cdots & 1 & {{c_{r,r + 1}}} & \cdots & {{c_{rn}}} & {{d_r}} \cr 0 & 0 & \cdots & 0 & 0 & \cdots & 0 & {{d_{r + 1}}} \cr \vdots & \vdots & {} & \vdots & \vdots & {} & \vdots & \vdots \cr 0 & 0 & \cdots & 0 & 0 & \cdots & 0 & {{d_m}} \cr}} \right)
Since the elementary transformation does not change the rank of the matrix,
rank\left(A \right) = rank\left(B \right) = r,\;rank\left({\overline A} \right) = rank\left({\overline B} \right),\;rank\left(A \right) \le rank\left({\overline A} \right)
Now suppose that the system of linear equations AX = B has a solution. Then dr+1 = dr+2 = ⋯ = dm = 0, and whether r < m or r = m, in both cases
rank\left({\overline A} \right) = rank\left({\overline B} \right) = r
, so rank (A) = rank (Ā) = r. If the system of equations has only one solution, then the number of free unknowns is zero, so r = n.
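The three rank criteria can be packaged into a small classifier (a sketch assuming NumPy; `classify` is a hypothetical helper name):

```python
import numpy as np

rank = np.linalg.matrix_rank

def classify(A, b):
    """Classify AX = b by comparing rank(A) with the rank of the augmented matrix."""
    A_bar = np.hstack([A, b.reshape(-1, 1)])
    rA, rAb, n = rank(A), rank(A_bar), A.shape[1]
    if rA < rAb:
        return "no solution"
    return "unique solution" if rA == n else "infinitely many solutions"

A = np.array([[1., 1.], [1., 1.]])
print(classify(A, np.array([1., 2.])))  # no solution
print(classify(A, np.array([1., 1.])))  # infinitely many solutions
print(classify(np.eye(2), np.array([3., 4.])))  # unique solution
```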
Conclusion
The rank of a matrix is a basic concept and one of the most important quantitative characteristics of a matrix. It is an invariant under elementary transformation. The rank of a matrix is an important concept that reflects the inherent characteristics of a matrix. It has an important role in linear algebra, analytic geometry and even probability theory.