AHEAD OF PRINT
Journal information
License
Format
Journal
eISSN
2444-8656
First published
01 Jan 2016
Publication frequency
2 issues per year
Languages
English
Access type: Open Access

Analysis of the properties of matrix rank and the relationship between matrix rank and matrix operations

Published online: 30 Dec 2021
Volume & issue: AHEAD OF PRINT
Page range: -
Submitted: 17 Jun 2021
Accepted: 24 Sep 2021
Abstract

This paper discusses the invariance of the rank of a matrix, the conditions for and applications of matrix rank inequalities, the relationship between matrix rank and matrix operations, its relationship with matrix invertibility, the linear dependence of vector groups and the algebraic multiplicity of the eigenvalue zero, among other properties. Applications of the rank of a matrix in linear algebra, analytic geometry, probability theory, etc., are obtained.


Introduction

The modern concept of the matrix took shape gradually in the 19th century. In 1801, the German mathematician Gauss worked with the full array of coefficients of a linear transformation. In 1844, the German mathematician Eisenstein discussed 'transformations' (matrices) and their products. In 1850, the British mathematician Sylvester first used the term 'matrix'. In 1858, the British mathematician Cayley published his 'Memoir on the Theory of Matrices'. He was the first to study matrices as independent mathematical objects and the first to publish a series of articles on the subject; he is therefore regarded as the founder of matrix theory. He gave a series of definitions that are still in common use, such as equality of two matrices, the zero matrix, the sum of two matrices, the product of a number and a matrix, the product of two matrices, the matrix inverse and the transposed matrix [1]. Cayley also noted that matrix multiplication is associative but in general not commutative, and that an m × n matrix can only be multiplied by an n × k matrix. In 1854, the French mathematician Hermite used the term 'orthogonal matrix', but its formal definition was not published until 1878, by the German mathematician Frobenius. In 1879, Frobenius introduced the concept of matrix rank. The matrix is an important basic concept in mathematics, a principal object of study in algebra and an important tool in applied mathematics.

Properties of matrix rank

The greatest order r of a non-zero minor of a matrix A is called the rank of the matrix, written rank(A) = r. If a matrix has no non-zero minor at all (i.e. it is the zero matrix), its rank is defined to be zero.

Invariance of the rank of a matrix

A matrix and its transpose have equal rank, that is, rank(A^T) = rank(A). Elementary transformations do not change the rank of a matrix.
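These two facts can be checked numerically. The following sketch (our own illustration, not part of the paper) computes the rank by Gaussian elimination over exact rationals, so the pivot count equals the greatest order of a non-zero minor:

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination over exact rationals (pivot count)."""
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0]) if A else 0):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        A[r], A[piv] = A[piv], A[r]       # move the pivot row up
        for i in range(len(A)):
            if i != r and A[i][c] != 0:   # clear the rest of the column
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

A = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]     # row 2 = 2 * row 1, so rank 2
At = [list(col) for col in zip(*A)]        # transpose of A
assert rank(A) == 2
assert rank(At) == rank(A)                 # rank(A^T) = rank(A)
```

Exact arithmetic avoids the floating-point tolerance issues a numerical rank routine would need to handle.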

Proof 1

Suppose that rows i and j of a matrix A = (a_{ij})_{m×n} are exchanged to obtain the matrix B:
$$A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{i1} & \cdots & a_{in} \\ \vdots & & \vdots \\ a_{j1} & \cdots & a_{jn} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}, \qquad B = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{j1} & \cdots & a_{jn} \\ \vdots & & \vdots \\ a_{i1} & \cdots & a_{in} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}$$
and let the rank of A be r; we show the rank of B is also r. Let D be an s-order minor of B with s > r. If D does not contain elements of rows i and j simultaneously, then D is, up to a reordering of rows, an s-order minor of A, so D = 0. If D contains elements of both rows i and j, then
$$D = \begin{vmatrix} a_{lt_1} & \cdots & a_{lt_s} \\ \vdots & & \vdots \\ a_{jt_1} & \cdots & a_{jt_s} \\ \vdots & & \vdots \\ a_{it_1} & \cdots & a_{it_s} \\ \vdots & & \vdots \\ a_{kt_1} & \cdots & a_{kt_s} \end{vmatrix} = -\begin{vmatrix} a_{lt_1} & \cdots & a_{lt_s} \\ \vdots & & \vdots \\ a_{it_1} & \cdots & a_{it_s} \\ \vdots & & \vdots \\ a_{jt_1} & \cdots & a_{jt_s} \\ \vdots & & \vdots \\ a_{kt_1} & \cdots & a_{kt_s} \end{vmatrix} = 0$$
since the last determinant is an s-order minor of A. This shows that rank(A) ≥ rank(B). Exchanging rows i and j of B recovers A, so rank(A) ≤ rank(B). Hence rank(A) = rank(B), which proves that the first elementary transformation does not change the rank of the matrix.

Suppose that row i of matrix A is multiplied by a number k ≠ 0 to obtain matrix B, and let rank(A) = r. Let D be an s-order minor of B with s > r. If D does not contain elements of row i, then D is an s-order minor of A, so D = 0; if D contains elements of row i, then
$$D = \begin{vmatrix} a_{lt_1} & \cdots & a_{lt_s} \\ \vdots & & \vdots \\ ka_{it_1} & \cdots & ka_{it_s} \\ \vdots & & \vdots \\ a_{kt_1} & \cdots & a_{kt_s} \end{vmatrix} = k\begin{vmatrix} a_{lt_1} & \cdots & a_{lt_s} \\ \vdots & & \vdots \\ a_{it_1} & \cdots & a_{it_s} \\ \vdots & & \vdots \\ a_{kt_1} & \cdots & a_{kt_s} \end{vmatrix} = 0$$
So rank(A) ≥ rank(B). Multiplying row i of B by $\tfrac{1}{k}$ recovers A, so rank(A) ≤ rank(B). Hence rank(A) = rank(B), which proves that the second elementary transformation does not change the rank of the matrix.

Now multiply row j of a matrix A by the number k and add it to row i to get the matrix B:
$$A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{i1} & \cdots & a_{in} \\ \vdots & & \vdots \\ a_{j1} & \cdots & a_{jn} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}, \qquad B = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{i1} + ka_{j1} & \cdots & a_{in} + ka_{jn} \\ \vdots & & \vdots \\ a_{j1} & \cdots & a_{jn} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}$$
With the rank of A equal to r, we have to prove that the rank of B is also r. We first prove that the rank of B cannot exceed r. If B has no minors of order greater than r, then of course it has no non-zero minors of order greater than r, and its rank cannot exceed r. So let D be an s-order minor of B with s > r. There are three possible situations.

If D contains no element of row i, then D is also a minor of A; since the rank of A is r and s > r, D = 0.

If D contains elements of row i and elements of row j, then
$$D = \begin{vmatrix} a_{ht_1} & \cdots & a_{ht_s} \\ \vdots & & \vdots \\ a_{it_1} + ka_{jt_1} & \cdots & a_{it_s} + ka_{jt_s} \\ \vdots & & \vdots \\ a_{jt_1} & \cdots & a_{jt_s} \\ \vdots & & \vdots \\ a_{lt_1} & \cdots & a_{lt_s} \end{vmatrix} = \begin{vmatrix} a_{ht_1} & \cdots & a_{ht_s} \\ \vdots & & \vdots \\ a_{it_1} & \cdots & a_{it_s} \\ \vdots & & \vdots \\ a_{jt_1} & \cdots & a_{jt_s} \\ \vdots & & \vdots \\ a_{lt_1} & \cdots & a_{lt_s} \end{vmatrix} = 0$$

If D contains elements of row i but no elements of row j, then
$$D = \begin{vmatrix} a_{ht_1} & \cdots & a_{ht_s} \\ \vdots & & \vdots \\ a_{it_1} + ka_{jt_1} & \cdots & a_{it_s} + ka_{jt_s} \\ \vdots & & \vdots \\ a_{lt_1} & \cdots & a_{lt_s} \end{vmatrix} = \begin{vmatrix} a_{ht_1} & \cdots & a_{ht_s} \\ \vdots & & \vdots \\ a_{it_1} & \cdots & a_{it_s} \\ \vdots & & \vdots \\ a_{lt_1} & \cdots & a_{lt_s} \end{vmatrix} + k\begin{vmatrix} a_{ht_1} & \cdots & a_{ht_s} \\ \vdots & & \vdots \\ a_{jt_1} & \cdots & a_{jt_s} \\ \vdots & & \vdots \\ a_{lt_1} & \cdots & a_{lt_s} \end{vmatrix} = D_1 + kD_2$$

Since D_1 and D_2 are (up to row order) s-order minors of the matrix A, D_1 = 0 and D_2 = 0. Thus D = 0.

From the above three cases, all minors of matrix B of order greater than r are zero. Therefore the rank of B is not greater than r, i.e. rank(A) ≥ rank(B). Similarly, an elementary transformation applied to B recovers A, so rank(A) ≤ rank(B). In this way we prove that rank(A) = rank(B): the third elementary transformation does not change the rank of the matrix. The three points above prove that elementary transformations do not change the rank of a matrix. This completes the proof [2, 3].
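The three elementary row operations can be spot-checked numerically. The sketch below (our own illustration, not from the paper) uses an exact-rational `rank` helper and verifies that swapping rows, scaling a row by a non-zero constant and adding a multiple of one row to another all leave the rank unchanged:

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination over exact rationals (pivot count)."""
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0]) if A else 0):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

A = [[1, 2], [3, 4], [4, 6]]
r0 = rank(A)
B1 = [A[1], A[0], A[2]]                                   # swap rows 1 and 2
B2 = [[5 * x for x in A[0]], A[1], A[2]]                  # scale row 1 by 5
B3 = [[x + 3 * y for x, y in zip(A[0], A[1])], A[1], A[2]]  # row 1 += 3 * row 2
assert rank(B1) == rank(B2) == rank(B3) == r0
```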

In fact, performing an elementary row or column transformation is equivalent to multiplying the matrix on the left or right by an invertible matrix. Let A be an m × n matrix:
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$$
By elementary row transformations and column transformations of the first kind (column swaps), A can be brought to the staircase form
$$J = \begin{pmatrix} 1 & * & * & \cdots & * & * & \cdots & * \\ 0 & 1 & * & \cdots & * & * & \cdots & * \\ \vdots & \vdots & \vdots & & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & 1 & * & \cdots & * \\ 0 & 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \end{pmatrix}$$
and further to
$$\begin{pmatrix} 1 & 0 & \cdots & 0 & c_{1,r+1} & \cdots & c_{1n} \\ 0 & 1 & \cdots & 0 & c_{2,r+1} & \cdots & c_{2n} \\ \vdots & \vdots & & \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 & c_{r,r+1} & \cdots & c_{rn} \\ 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \end{pmatrix} = \begin{pmatrix} I_r & C_{r \times (n-r)} \\ 0 & 0 \end{pmatrix}$$
If all elements a_{ij} of A are zero, then A already has the form of J. Suppose some a_{ij} is not zero. Swapping rows and columns if necessary, this element can be brought to the top left corner of the matrix. Multiply the first row by $\tfrac{1}{a_{ij}}$ and then subtract suitable multiples of the first row from the remaining rows. Matrix A becomes

Proof

$$B = \begin{pmatrix} 1 & * & \cdots & * \\ 0 & * & \cdots & * \\ \vdots & \vdots & & \vdots \\ 0 & * & \cdots & * \end{pmatrix}$$
If, apart from the first row, all other rows of B are zero, then B has the form of J. Otherwise, suppose some element b in the last m − 1 rows of B is not zero. Move b to the (2, 2) position and, by the same method as above, convert B to
$$\begin{pmatrix} 1 & * & * & \cdots & * \\ 0 & 1 & * & \cdots & * \\ 0 & 0 & * & \cdots & * \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & * & \cdots & * \end{pmatrix}$$
Continuing in this way finally yields a matrix of the form J [4, 5].

We only need to subtract suitable multiples of row r from rows 1, 2, …, r − 1, then suitable multiples of row r − 1 from rows 1, 2, …, r − 2, and so on, to finally obtain a matrix of the form $\begin{pmatrix} I_r & C_{r,n-r} \\ 0 & 0 \end{pmatrix}$.

In fact, when a matrix is brought into echelon form by elementary transformations, the number of non-zero rows of the echelon matrix is the rank of the matrix. This yields another, equivalent definition of matrix rank:

The number of non-zero rows in the echelon form obtained from A = (a_{ij})_{m×n} by elementary transformations is the rank of the matrix; if it equals r we write R(A) = r. In particular, the zero matrix has rank R(O) = 0.

Elementary transformations turn the matrix A into its equivalent standard form $I = \begin{pmatrix} E_r & 0 \\ 0 & 0 \end{pmatrix}$. Since elementary transformations do not change the rank of the matrix, rank(A) = r. From this, the following theorem is obtained:

Any matrix A can be reduced to the form $I = \begin{pmatrix} E_r & 0 \\ 0 & 0 \end{pmatrix}$; I is called the equivalent standard form of A, and rank(A) = r. Equivalent matrices have the same rank, and matrices of the same size with the same rank are equivalent; likewise, similar matrices have the same rank: A ∼ B implies rank(A) = rank(B).

Proof

If A and B are similar, then by definition there is an invertible matrix T such that B = T^{-1}AT. Since multiplying by an invertible matrix leaves the rank unchanged (proved below), rank(A) = rank(T^{-1}A) = rank(T^{-1}AT) = rank(B).

If rank(A) = rank(B), let the equivalent standard form of A be I_A, so A is equivalent to I_A, and let that of B be I_B, so B is equivalent to I_B. Since rank(A) = rank(B), I_A = I_B, and by transitivity of equivalence A is equivalent to B. The proof is complete. If the Jordan canonical forms of matrices A and B are both J, then rank(A) = rank(B).

Proof

Set
$$J = \begin{pmatrix} J_1 & & \\ & \ddots & \\ & & J_s \end{pmatrix}, \qquad J_i = \begin{pmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i \end{pmatrix}_{n_i \times n_i}, \quad i = 1, 2, \ldots, s.$$
Then the elementary divisors of the matrix A and of the matrix B are both (λ − λ_1)^{n_1}, (λ − λ_2)^{n_2}, …, (λ − λ_s)^{n_s}, so A and B are similar. By the theorem above, rank(A) = rank(B). From this proof we can get the following:

If the rank of matrix A is r, the rank of its Jordan canonical form is also r. That is, the canonical forms of a matrix have the same rank as the matrix itself.

Congruent matrices have the same rank. That is, if A and B are congruent, then rank(A) = rank(B).

Proof

If A and B are congruent, then there is an invertible matrix C such that C^T AC = B. By property 1.1.1, rank(C^T) = rank(C), so C^T is also invertible. By 1.3.4,
$$\operatorname{rank}(C^T AC) = \operatorname{rank}(C^T A) = \operatorname{rank}(A).$$
If a block matrix A is transformed into a matrix B by finitely many elementary transformations of block matrices, the rank of the matrix is unchanged.

Some basic properties of the rank of a matrix

0 ≤ rank(A_{m×n}) ≤ min{m, n}. rank(A) = rank(A^T). If matrix B is obtained by taking some rows (columns) of A, then rank(A) ≥ rank(B). Let A be a square matrix of order n (n ≥ 2); then
$$\operatorname{rank}(A^*) = \begin{cases} n, & \operatorname{rank}(A) = n \\ 1, & \operatorname{rank}(A) = n - 1 \\ 0, & \operatorname{rank}(A) < n - 1 \end{cases}$$
where A* is the adjugate of A.
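The three cases of the adjugate formula can be verified on small examples. The sketch below is our own illustration (not from the paper); `det` and `adjugate` are naive cofactor-based helpers suitable only for small matrices:

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination over exact rationals (pivot count)."""
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0]) if A else 0):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def det(M):
    # determinant by cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def adjugate(M):
    # adjugate = transpose of the cofactor matrix
    n = len(M)
    cof = [[(-1) ** (i + j) * det([row[:j] + row[j+1:]
                                   for k, row in enumerate(M) if k != i])
            for j in range(n)] for i in range(n)]
    return [list(col) for col in zip(*cof)]

assert rank(adjugate([[1, 0, 0], [0, 2, 0], [0, 0, 3]])) == 3  # rank(A) = n
assert rank(adjugate([[1, 0, 0], [0, 1, 0], [0, 0, 0]])) == 1  # rank(A) = n - 1
assert rank(adjugate([[1, 0, 0], [0, 0, 0], [0, 0, 0]])) == 0  # rank(A) < n - 1
```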

Matrix rank and matrix operations

$$\operatorname{rank}(kA) = \begin{cases} \operatorname{rank}(A), & k \ne 0 \\ 0, & k = 0 \end{cases}, \qquad \operatorname{rank}\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} = \operatorname{rank}(A) + \operatorname{rank}(B), \qquad \operatorname{rank}\begin{pmatrix} A & 0 \\ C & B \end{pmatrix} \ge \operatorname{rank}(A) + \operatorname{rank}(B).$$
rank(AB) ≤ min{rank(A), rank(B)}. In particular, if A is invertible, rank(AB) = rank(B).

Proof

Let A be an m × n matrix and B an n × p matrix, with rank(A) = r, and let the equivalent standard form of A be
$$I_A = \begin{pmatrix} E_r & 0_{r,n-r} \\ 0_{m-r,r} & 0_{m-r,n-r} \end{pmatrix}$$
In other words, there are elementary matrices E_1, E_2, …, E_p of order m and elementary matrices E_{p+1}, E_{p+2}, …, E_q of order n such that
$$E_1 \cdots E_p A E_{p+1} \cdots E_q = I_A$$

So we have
$$E_1 \cdots E_p AB = \big(E_1 \cdots E_p A E_{p+1} \cdots E_q\big)\big(E_q^{-1} \cdots E_{p+1}^{-1}\big)B = I_A E_q^{-1} \cdots E_{p+1}^{-1} B = I_A B_1$$
where $B_1 = E_q^{-1} \cdots E_{p+1}^{-1} B$. Apart from possibly the first r rows, all rows of I_A B_1 are zero, so rank(I_A B_1) ≤ r. Since E_1 ⋯ E_p AB is obtained from AB by elementary transformations, the two have the same rank, which proves rank(AB) ≤ rank(A). Similarly, rank(AB) ≤ rank(B). If one of A, B is invertible, say A, then on the one hand rank(AB) ≤ rank(B) by the argument above, and on the other hand B = A^{-1}(AB), so rank(B) ≤ rank(AB). Therefore rank(AB) = rank(B). This completes the proof.
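A quick numerical sanity check of these product properties, using our own illustrative helpers (not part of the paper):

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination over exact rationals (pivot count)."""
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0]) if A else 0):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 2], [2, 4]]                       # rank 1
B = [[1, 1], [0, 1]]                       # invertible, rank 2
assert rank(matmul(A, B)) <= min(rank(A), rank(B))
assert rank(matmul(B, A)) == rank(A)       # invertible factor preserves rank
# block diagonal: rank(diag(A, B)) = rank(A) + rank(B)
D = [[1, 2, 0, 0], [2, 4, 0, 0], [0, 0, 1, 1], [0, 0, 0, 1]]
assert rank(D) == rank(A) + rank(B)
```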

This property extends to products of any m matrices: the rank of a product of m matrices is not greater than the rank of each factor. If matrices A and B have the same size, then rank(A ± B) ≤ rank(A) + rank(B).

Proof

First we prove rank(A + B) ≤ rank(A) + rank(B). Since
$$\begin{pmatrix} A & B \\ 0 & B \end{pmatrix}\begin{pmatrix} E_n & 0 \\ E_n & 0 \end{pmatrix} = \begin{pmatrix} A + B & 0 \\ B & 0 \end{pmatrix}$$
we have
$$\operatorname{rank}(A + B) \le \operatorname{rank}\begin{pmatrix} A + B & 0 \\ B & 0 \end{pmatrix} = \operatorname{rank}\left(\begin{pmatrix} A & B \\ 0 & B \end{pmatrix}\begin{pmatrix} E_n & 0 \\ E_n & 0 \end{pmatrix}\right) \le \operatorname{rank}\begin{pmatrix} A & B \\ 0 & B \end{pmatrix} \le \operatorname{rank}(A) + \operatorname{rank}(B).$$
Replacing B by −B gives rank(A − B) ≤ rank(A) + rank(B). Moreover, rank(A) = rank(A − B + B) ≤ rank(A − B) + rank(B), so altogether rank(A ± B) ≤ rank(A) + rank(B).
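The subadditivity bound, together with the companion lower bound |rank(A) − rank(B)| ≤ rank(A ± B) derived in the proof, can be checked numerically with our own illustrative `rank` helper (not part of the paper):

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination over exact rationals (pivot count)."""
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0]) if A else 0):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

A = [[1, 0], [0, 0]]
B = [[0, 0], [0, 1]]
S = [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]   # A + B
D = [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]   # A - B
assert rank(S) <= rank(A) + rank(B)         # here 2 <= 2: equality attained
assert rank(D) <= rank(A) + rank(B)
assert abs(rank(A) - rank(B)) <= rank(S)    # companion lower bound
```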

Some inequality equations about the ranks of matrices and their applications

(Sylvester inequality) Let A be an s × n matrix and B an n × m matrix; then rank(AB) ≥ rank(A) + rank(B) − n.

Proof

(Proof using a block matrix.) Since
$$\begin{pmatrix} AB & 0 \\ 0 & E_n \end{pmatrix} \xrightarrow{\,r_1 + A\,r_2\,} \begin{pmatrix} AB & A \\ 0 & E_n \end{pmatrix} \xrightarrow{\,c_1 - c_2 B\,} \begin{pmatrix} 0 & A \\ -B & E_n \end{pmatrix},$$
and block elementary transformations do not change the rank,
$$\operatorname{rank}\begin{pmatrix} AB & 0 \\ 0 & E_n \end{pmatrix} = \operatorname{rank}\begin{pmatrix} 0 & A \\ -B & E_n \end{pmatrix} \ge \operatorname{rank}(A) + \operatorname{rank}(B),$$
that is, rank(AB) + n ≥ rank(A) + rank(B), which gives rank(AB) ≥ rank(A) + rank(B) − n. This completes the proof [6, 7].
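Sylvester's inequality can be spot-checked numerically, including a case where the bound is attained. The sketch below is our own illustration (not from the paper):

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination over exact rationals (pivot count)."""
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0]) if A else 0):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# nilpotent N with N^2 = 0, n = 2: equality 0 >= 1 + 1 - 2
N = [[0, 1], [0, 0]]
assert rank(matmul(N, N)) >= rank(N) + rank(N) - 2
# 2x3 times 3x2 example, n = 3: 2 >= 2 + 2 - 3
A = [[1, 0, 0], [0, 1, 0]]
B = [[1, 0], [0, 1], [0, 0]]
assert rank(matmul(A, B)) >= rank(A) + rank(B) - 3
```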

If the matrices A and B are n × n matrices and AB = 0, then rank(A) + rank(B) ≤ n. (Frobenius inequality) Let A, B and C in turn be m × n, n × s and s × t matrices; then rank(ABC) ≥ rank(AB) + rank(BC) − rank(B).

Proof: since
$$\begin{pmatrix} AB & ABC \\ B & 0 \end{pmatrix}\begin{pmatrix} I_s & C \\ 0 & -I_t \end{pmatrix} = \begin{pmatrix} AB & 0 \\ B & BC \end{pmatrix},$$
property 1.1.3 gives
$$\operatorname{rank}(AB) + \operatorname{rank}(BC) \le \operatorname{rank}\begin{pmatrix} AB & 0 \\ B & BC \end{pmatrix} = \operatorname{rank}\begin{pmatrix} AB & ABC \\ B & 0 \end{pmatrix} = \operatorname{rank}\begin{pmatrix} 0 & ABC \\ B & 0 \end{pmatrix} = \operatorname{rank}(ABC) + \operatorname{rank}(B),$$
so rank(ABC) ≥ rank(AB) + rank(BC) − rank(B). This completes the proof. Let A and B be matrices of order n; then rank(AB − I_n) ≤ rank(A − I_n) + rank(B − I_n).

Since
$$\begin{pmatrix} A - I_n & B - I_n \\ 0 & B - I_n \end{pmatrix}\begin{pmatrix} B & 0 \\ I_n & 0 \end{pmatrix} = \begin{pmatrix} AB - I_n & 0 \\ B - I_n & 0 \end{pmatrix},$$
properties 1.3.2 and 1.3.4 give
$$\operatorname{rank}(AB - I_n) \le \operatorname{rank}\begin{pmatrix} AB - I_n & 0 \\ B - I_n & 0 \end{pmatrix} \le \operatorname{rank}\begin{pmatrix} A - I_n & B - I_n \\ 0 & B - I_n \end{pmatrix} \le \operatorname{rank}(A - I_n) + \operatorname{rank}(B - I_n),$$
so rank(AB − I_n) ≤ rank(A − I_n) + rank(B − I_n).
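Both the Frobenius inequality and the corollary just proved hold for arbitrary matrices, so they can be spot-checked on random integer matrices. The sketch below is our own illustration (not from the paper):

```python
from fractions import Fraction
import random

def rank(M):
    """Rank via Gaussian elimination over exact rationals (pivot count)."""
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0]) if A else 0):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

random.seed(0)
n = 3
A = [[random.randint(-2, 2) for _ in range(n)] for _ in range(n)]
B = [[random.randint(-2, 2) for _ in range(n)] for _ in range(n)]
C = [[random.randint(-2, 2) for _ in range(n)] for _ in range(n)]
I = [[int(i == j) for j in range(n)] for i in range(n)]
sub = lambda X, Y: [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

# Frobenius: rank(ABC) + rank(B) >= rank(AB) + rank(BC)
ABC = matmul(matmul(A, B), C)
assert rank(ABC) + rank(B) >= rank(matmul(A, B)) + rank(matmul(B, C))
# corollary: rank(AB - I) <= rank(A - I) + rank(B - I)
assert rank(sub(matmul(A, B), I)) <= rank(sub(A, I)) + rank(sub(B, I))
```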

If A and B are matrices of order n, then rank(AB + A + B) ≤ rank(A) + rank(B) [8, 9].

Since
$$\begin{pmatrix} A & B \\ 0 & B \end{pmatrix}\begin{pmatrix} B + I_n & 0 \\ I_n & 0 \end{pmatrix} = \begin{pmatrix} AB + A + B & 0 \\ B & 0 \end{pmatrix},$$
we have
$$\operatorname{rank}(AB + A + B) \le \operatorname{rank}\begin{pmatrix} AB + A + B & 0 \\ B & 0 \end{pmatrix} \le \operatorname{rank}\begin{pmatrix} A & B \\ 0 & B \end{pmatrix} \le \operatorname{rank}(A) + \operatorname{rank}(B).$$
Assume A ∈ P^{n×n} and f(x), g(x) ∈ P[x]; then
$$\operatorname{rank}(f(A)) + \operatorname{rank}(g(A)) = \operatorname{rank}(d(A)) + \operatorname{rank}(m(A))$$

where d(x) = (f(x), g(x)) is the greatest common divisor and m(x) = [f(x), g(x)] the least common multiple of f(x) and g(x).

If one of f(x), g(x) is the zero polynomial, the theorem obviously holds, so we may assume f(x) and g(x) are non-zero. Write f(x) = d(x)f_1(x), g(x) = d(x)g_1(x) and d(x) = μ(x)f(x) + ν(x)g(x), with f_1(x), g_1(x), μ(x), ν(x) ∈ P[x], so that d(A) = μ(A)f(A) + ν(A)g(A). Apply elementary transformations to the block matrix $\begin{pmatrix} f(A) & 0 \\ 0 & g(A) \end{pmatrix}$:
$$\begin{aligned}
&\begin{pmatrix} E & 0 \\ -g_1(A) & E \end{pmatrix}\begin{pmatrix} E & \nu(A) \\ 0 & E \end{pmatrix}\begin{pmatrix} f(A) & 0 \\ 0 & g(A) \end{pmatrix}\begin{pmatrix} E & \mu(A) \\ 0 & E \end{pmatrix}\begin{pmatrix} E & 0 \\ -f_1(A) & E \end{pmatrix}\begin{pmatrix} 0 & -E \\ E & 0 \end{pmatrix} \\
&= \begin{pmatrix} E & 0 \\ -g_1(A) & E \end{pmatrix}\begin{pmatrix} f(A) & \mu(A)f(A) + \nu(A)g(A) \\ 0 & g(A) \end{pmatrix}\begin{pmatrix} E & 0 \\ -f_1(A) & E \end{pmatrix}\begin{pmatrix} 0 & -E \\ E & 0 \end{pmatrix} \\
&= \begin{pmatrix} E & 0 \\ -g_1(A) & E \end{pmatrix}\begin{pmatrix} f(A) & d(A) \\ 0 & g(A) \end{pmatrix}\begin{pmatrix} E & 0 \\ -f_1(A) & E \end{pmatrix}\begin{pmatrix} 0 & -E \\ E & 0 \end{pmatrix} \\
&= \begin{pmatrix} f(A) & d(A) \\ -g_1(A)f(A) & 0 \end{pmatrix}\begin{pmatrix} E & 0 \\ -f_1(A) & E \end{pmatrix}\begin{pmatrix} 0 & -E \\ E & 0 \end{pmatrix} \\
&= \begin{pmatrix} d(A) & 0 \\ 0 & g_1(A)f(A) \end{pmatrix} = \begin{pmatrix} d(A) & 0 \\ 0 & g_1(A)f_1(A)d(A) \end{pmatrix} = \begin{pmatrix} d(A) & 0 \\ 0 & m(A) \end{pmatrix}
\end{aligned}$$
Hence
$$\operatorname{rank}\begin{pmatrix} f(A) & 0 \\ 0 & g(A) \end{pmatrix} = \operatorname{rank}\begin{pmatrix} d(A) & 0 \\ 0 & m(A) \end{pmatrix} = \operatorname{rank}(d(A)) + \operatorname{rank}(m(A)).$$
This completes the proof [10].

This yields conditions for equality in the Sylvester and Frobenius inequalities:

Assume f(x), g(x) ∈ F[x] with (f(x), g(x)) = 1 and A ∈ F^{n×n}; then rank(f(A)) + rank(g(A)) = rank(f(A)g(A)) + n.

Proof

If (f(x), g(x)) = 1, then d(x) = 1 and m(x) = f(x)g(x), so rank(d(A)) = rank(E) = n and rank(m(A)) = rank(f(A)g(A)).

By Theorem 1.4.4, rank(f(A)) + rank(g(A)) = rank(f(A)g(A)) + n. This completes the proof.
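The coprime-polynomial identity holds for any square matrix, so it can be spot-checked with a random example. In the sketch below (our own illustration, not from the paper) we take the coprime pair f(x) = x and g(x) = x − 1, so that f(A)g(A) = A² − A:

```python
from fractions import Fraction
import random

def rank(M):
    """Rank via Gaussian elimination over exact rationals (pivot count)."""
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0]) if A else 0):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

random.seed(1)
n = 3
A = [[random.randint(-3, 3) for _ in range(n)] for _ in range(n)]
I = [[int(i == j) for j in range(n)] for i in range(n)]
AmI = [[a - e for a, e in zip(ra, re)] for ra, re in zip(A, I)]  # A - E

# rank(f(A)) + rank(g(A)) = rank(f(A)g(A)) + n for f(x) = x, g(x) = x - 1
assert rank(A) + rank(AmI) == rank(matmul(A, AmI)) + n
```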

Assume A ∈ F^{n×n}, f(x), g(x) ∈ P[x] and (f(x), g(x)) = 1; then rank(f(A)) + rank(g(A)) = n if and only if f(A)g(A) = 0.

Proof

If rank(f(A)) + rank(g(A)) = n, then by Theorem 1.4.4 rank(f(A)g(A)) = 0, and so f(A)g(A) = 0.

Conversely, if f(A)g(A) = 0, then rank(f(A)g(A)) = 0, so rank(f(A)) + rank(g(A)) = n. This completes the proof.

Assume A ∈ F^{n×n}, f(x), g(x), h(x) ∈ F[x] and (f(x), h(x)) = 1; then
$$\operatorname{rank}(f(A)g(A)) + \operatorname{rank}(g(A)h(A)) = \operatorname{rank}(f(A)g(A)h(A)) + \operatorname{rank}(g(A)).$$

Proof

Since (f(x), h(x)) = 1, we have (f(x)g(x), g(x)h(x)) = g(x) and m(x) = f(x)g(x)h(x). From Theorem 1.4.3 we get
$$\operatorname{rank}(f(A)g(A)) + \operatorname{rank}(g(A)h(A)) = \operatorname{rank}(f(A)g(A)h(A)) + \operatorname{rank}(g(A)).$$
This completes the proof.

Assume A ∈ F^{n×n}, f_i(x), g_j(x) ∈ P[x] and (f_i(x), g_j(x)) = 1 for 1 ≤ i ≤ m, 1 ≤ j ≤ t; then
$$\operatorname{rank}\Big(\prod_{i=1}^{m} f_i(A)\Big) + \operatorname{rank}\Big(\prod_{j=1}^{t} g_j(A)\Big) = n + \operatorname{rank}\Big(\prod_{i=1}^{m} f_i(A) \prod_{j=1}^{t} g_j(A)\Big).$$

Proof

Since (f_i(x), g_j(x)) = 1, we have d(A) = E and $m(A) = \prod_{i=1}^{m} f_i(A) \prod_{j=1}^{t} g_j(A)$. From Theorem 1.4.3 we get
$$\operatorname{rank}\Big(\prod_{i=1}^{m} f_i(A)\Big) + \operatorname{rank}\Big(\prod_{j=1}^{t} g_j(A)\Big) = n + \operatorname{rank}\Big(\prod_{i=1}^{m} f_i(A) \prod_{j=1}^{t} g_j(A)\Big).$$

Analysis of typical examples:

Example 1

Let A be a matrix of order n with A² = A. Prove that rank(A) + rank(A − E) = n, where E is the identity matrix of order n.

Proof

Set f(x) = x and g(x) = x − 1; then (f(x), g(x)) = 1, and since A² = A, A² − A = 0. Applying the theorem, we get
$$\operatorname{rank}(A) + \operatorname{rank}(A - E) = \operatorname{rank}(f(A)) + \operatorname{rank}(g(A)) = n + \operatorname{rank}(f(A)g(A)) = n + \operatorname{rank}(A(A - E)) = n + \operatorname{rank}(A^2 - A) = n + 0 = n.$$

Example 2

Let A be a matrix of order n with A² = E. Prove that rank(A + E) + rank(A − E) = n.

Proof

Set f(x) = x + 1 and g(x) = x − 1; then (f(x), g(x)) = 1. Applying the theorem, we get
$$\operatorname{rank}(A + E) + \operatorname{rank}(A - E) = n + \operatorname{rank}((A + E)(A - E)) = n + \operatorname{rank}(A^2 - E) = n + \operatorname{rank}(0) = n + 0 = n.$$
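Examples 1 and 2 can be checked on concrete matrices. In the sketch below (our own illustration, not from the paper), P is a projector (P² = P) and R a reflection (R² = E), both of order n = 2:

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination over exact rationals (pivot count)."""
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0]) if A else 0):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

E = [[1, 0], [0, 1]]
P = [[1, 0], [0, 0]]                  # projection onto the x-axis: P^2 = P
assert matmul(P, P) == P
PmE = [[p - e for p, e in zip(rp, re)] for rp, re in zip(P, E)]
assert rank(P) + rank(PmE) == 2       # Example 1 with n = 2

R = [[1, 0], [0, -1]]                 # reflection across the x-axis: R^2 = E
assert matmul(R, R) == E
RpE = [[r + e for r, e in zip(rr, re)] for rr, re in zip(R, E)]
RmE = [[r - e for r, e in zip(rr, re)] for rr, re in zip(R, E)]
assert rank(RpE) + rank(RmE) == 2     # Example 2 with n = 2
```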

Example 3

Let A ∈ F^{n×n} and let m be a positive integer. Then for any positive integers l, k:

rank(A^l) + rank((A^m − E)^k) = n, in case A^{m+1} = A;

rank((A − E)^l) + rank((A^{m−1} + A^{m−2} + ⋯ + A + E)^k) = n, in case A^m = E.

Proof

Set f(x) = x^l and g(x) = (x^m − 1)^k; then (f(x), g(x)) = 1. Applying the theorem,
$$\begin{aligned}
\operatorname{rank}(A^l) + \operatorname{rank}((A^m - E)^k) &= \operatorname{rank}(f(A)) + \operatorname{rank}(g(A)) = n + \operatorname{rank}(f(A)g(A)) = n + \operatorname{rank}\big(A^l(A^m - E)^k\big) \\
&= n + \operatorname{rank}\big(A^{l-1}(A^{m+1} - A)(A^m - E)^{k-1}\big) = n + \operatorname{rank}(0) = n.
\end{aligned}$$

Set f(x) = (x − 1)^l and g(x) = (x^{m−1} + x^{m−2} + ⋯ + x + 1)^k; then (f(x), g(x)) = 1. Applying Theorem 1.4.4,
$$\begin{aligned}
&\operatorname{rank}((A - E)^l) + \operatorname{rank}((A^{m-1} + A^{m-2} + \cdots + A + E)^k) = \operatorname{rank}(f(A)) + \operatorname{rank}(g(A)) \\
&= n + \operatorname{rank}(f(A)g(A)) = n + \operatorname{rank}\big((A - E)^l(A^{m-1} + A^{m-2} + \cdots + A + E)^k\big) \\
&= n + \operatorname{rank}\big((A - E)^{l-1}(A - E)(A^{m-1} + A^{m-2} + \cdots + A + E)(A^{m-1} + A^{m-2} + \cdots + A + E)^{k-1}\big) \\
&= n + \operatorname{rank}\big((A - E)^{l-1}\big((A^m + A^{m-1} + \cdots + A^2 + A) - (A^{m-1} + A^{m-2} + \cdots + A + E)\big)(A^{m-1} + A^{m-2} + \cdots + A + E)^{k-1}\big) \\
&= n + \operatorname{rank}\big((A - E)^{l-1}(A^m - E)(A^{m-1} + A^{m-2} + \cdots + A + E)^{k-1}\big) = n + \operatorname{rank}(0) = n.
\end{aligned}$$

The above three examples are very cumbersome to solve using annihilating polynomials, but they become simple and transparent using the Sylvester inequality and its equality conditions. Rank inequalities have many further applications in problem solving, which this article does not enumerate one by one.

The rank and invertibility of matrices

For any matrix A of order n, the following three statements are equivalent: (1) matrix A is invertible; (2) rank(A) = n; (3) det A ≠ 0. The row rank, column rank and rank of a matrix are all equal. Let A be an m × n matrix, P an invertible matrix of order m and Q an invertible matrix of order n; then rank(PAQ) = rank(AQ) = rank(PA) = rank(A).
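That multiplication by invertible matrices preserves rank can be checked directly; the sketch below is our own illustration (not from the paper):

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination over exact rationals (pivot count)."""
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0]) if A else 0):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 2, 3], [2, 4, 6]]                  # rank 1 (row 2 = 2 * row 1)
P = [[1, 1], [0, 1]]                         # invertible 2x2
Q = [[1, 0, 0], [2, 1, 0], [0, 0, 1]]        # invertible 3x3
assert rank(matmul(P, A)) == rank(A)
assert rank(matmul(A, Q)) == rank(A)
assert rank(matmul(matmul(P, A), Q)) == rank(A)
```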

Application of matrix rank in linear algebra

Consider a system of linear equations AX = B in n unknowns, where
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}, \qquad B = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}$$
and let its augmented matrix be
$$\overline{A} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & \vdots & & \vdots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{pmatrix}$$
Then: (1) the system AX = B has no solution if and only if rank(A) < rank(Ā); (2) the system AX = B has a unique solution if and only if rank(A) = rank(Ā) = n; (3) the system AX = B has infinitely many solutions if and only if rank(A) = rank(Ā) < n.

Proof

Using the elementary transformations indicated by Lemma 1.1.1 above, reduce A and Ā to $$B = \begin{pmatrix} 1 & 0 & \cdots & 0 & c_{1,r+1} & \cdots & c_{1n} \\ 0 & 1 & \cdots & 0 & c_{2,r+1} & \cdots & c_{2n} \\ \vdots & \vdots & & \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 & c_{r,r+1} & \cdots & c_{rn} \\ 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \end{pmatrix}, \quad \bar{B} = \begin{pmatrix} 1 & 0 & \cdots & 0 & c_{1,r+1} & \cdots & c_{1n} & d_{1} \\ 0 & 1 & \cdots & 0 & c_{2,r+1} & \cdots & c_{2n} & d_{2} \\ \vdots & \vdots & & \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & c_{r,r+1} & \cdots & c_{rn} & d_{r} \\ 0 & 0 & \cdots & 0 & 0 & \cdots & 0 & d_{r+1} \\ \vdots & \vdots & & \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & \cdots & 0 & 0 & \cdots & 0 & d_{m} \end{pmatrix}$$ Since elementary transformations do not change the rank of a matrix, $$rank(A) = rank(B) = r, \quad rank(\bar{A}) = rank(\bar{B}), \quad rank(A) \le rank(\bar{A})$$ Now suppose the system of linear equations AX = B has a solution; then necessarily d_{r+1} = d_{r+2} = ⋯ = d_m = 0. Whether r < m or r = m, in both cases $$rank(\bar{A}) = rank(\bar{B}) = r$$ so rank(A) = rank(Ā) = r. If the system has exactly one solution, the number of free unknowns is zero, so r = n.
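The proof hinges on the fact that elementary row operations preserve rank. A quick numerical illustration (the matrix is an arbitrary example with one dependent row, not from the text):

```python
import numpy as np

r = np.linalg.matrix_rank

A = np.array([[1., 2., 3.],
              [2., 4., 6.],    # dependent row: twice the first
              [1., 0., 1.]])

E1 = A.copy(); E1[[0, 2]] = E1[[2, 0]]    # type 1: swap two rows
E2 = A.copy(); E2[1] *= 5.0               # type 2: scale a row by a nonzero constant
E3 = A.copy(); E3[1] -= 2.0 * E3[0]       # type 3: add a multiple of one row to another

# All three elementary operations leave the rank unchanged
assert r(E1) == r(E2) == r(E3) == r(A) == 2
```

The type-3 operation here actually zeroes out the dependent row, mimicking one step of the reduction to the canonical form B above, yet the rank stays at 2.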

Conclusion

The rank of a matrix is a basic concept and one of its most important numerical characteristics: it is invariant under elementary transformations and reflects the inherent structure of the matrix. It plays an important role in linear algebra, analytic geometry and even probability theory.

Şahin R, Yağcı O. A New Generalization of Pochhammer Symbol and Its Applications. Applied Mathematics and Nonlinear Sciences, 2020, 5(1):255–266. doi:10.2478/amns.2020.1.00023

Hosamani S M, Shirkol S S. QSPR Analysis of certain Distance Based Topological Indices. Applied Mathematics and Nonlinear Sciences, 2019, 4(2):371–386. doi:10.2478/AMNS.2019.2.00032

Pitaval R A, Dai W, Tirkkonen O. Convergence of Gradient Descent for Low-Rank Matrix Approximation. IEEE Transactions on Information Theory, 2015, 61(8):4451–4457. doi:10.1109/TIT.2015.2448695

Chen B, Yang, et al. An algorithm for low-rank matrix factorization and its applications. Neurocomputing, 2018, 275:1012–1020. doi:10.1016/j.neucom.2017.09.052

Parekh A, Selesnick I W. Enhanced Low-Rank Matrix Approximation. IEEE Signal Processing Letters, 2016, 23(4):493–497. doi:10.1109/LSP.2016.2535227

Cen Y G, Zhao R Z, Cen L H, et al. Defect inspection for TFT-LCD images based on the low-rank matrix reconstruction. Neurocomputing, 2015, 149(pt. C):1206–1215. doi:10.1016/j.neucom.2014.09.007

Zhang X, Hao S, Xu C, et al. Image classification based on low-rank matrix recovery and Naive Bayes collaborative representation. Neurocomputing, 2015, 169:110–118. doi:10.1016/j.neucom.2015.01.085

Ongie G, Jacob M. A Fast Algorithm for Convolutional Structured Low-Rank Matrix Recovery. IEEE Transactions on Computational Imaging, 2017, 3(4):535. doi:10.1109/TCI.2017.2721819

Ongie G, Jacob M. GIRAF: A Fast Algorithm for Structured Low-Rank Matrix Recovery. IEEE Transactions on Computational Imaging, 2016, PP(99):1–1. doi:10.1109/ISBI.2016.7493322

Li L Z, Zou B J, Zhu C Z. Improved nonconvex optimization model for low-rank matrix recovery. Journal of Central South University, 2015, 22(3):984–991. doi:10.1007/s11771-015-2609-4

Lopes C R, Soares E G, Santos A, et al. Efeito do alongamento passivo extensivo no desempenho de séries múltiplas no treinamento do força (Effects of passive stretching protocol on multiple sets performance in resistance training). Computer Applications in Engineering Education, 2015, 17(1):34–43.
