Example 2.1.1. Let us look at some examples of linear systems.
Before we start with the general setup for a linear system of equations, we give different interpretations of the examples considered above.
Thus, there are three ways of looking at the linear system Ax = b: as the name suggests, one way is to look at the point of intersection of planes, another is the vector sum approach, and the third is the matrix multiplication approach. All three approaches are important as they give different insights into the study of matrices. After this chapter, we will see that the last two interpretations form the fundamentals of linear algebra.
Definition 2.1.3. [Linear System] A system of m linear equations in n variables x1, x2, …, xn is a set of equations of the form
a11x1 + a12x2 + ⋯ + a1nxn = b1
a21x1 + a22x2 + ⋯ + a2nxn = b2
⋮
am1x1 + am2x2 + ⋯ + amnxn = bm        (2.1.1)
where, for 1 ≤ i ≤ m and 1 ≤ j ≤ n, aij, bi ∈ ℝ. Linear System (2.1.1) is called homogeneous if b1 = b2 = ⋯ = bm = 0 and non-homogeneous, otherwise.
Definition 2.1.4. [Coefficient and Augmented Matrices] Let A = [aij] be the m × n matrix of coefficients, x = [x1, …, xn]T and b = [b1, …, bm]T. Then, (2.1.1) can be re-written as Ax = b. In this setup, the matrix A is called the coefficient matrix and the block matrix [A b] is called the augmented matrix of the linear system (2.1.1).
Remark 2.1.5. Consider the linear system Ax = b, where A ∈ Mm,n(ℂ), b ∈ Mm,1(ℂ) and x ∈ Mn,1(ℂ). If [Ab] is the augmented matrix and xT = [x1,…,xn] then,
Definition 2.1.6. [Solution of a Linear System] A solution of Ax = b is a vector y such that Ay indeed equals b. The set of all solutions is called the solution set of the system. For example, the solution set of Ax = b, with A = and b = equals .
Definition 2.1.7. [Consistent, Inconsistent] Consider a linear system Ax = b. Then, this linear system is called consistent if it admits a solution and is called inconsistent if it admits no solution. For example, the homogeneous system Ax = 0 is always consistent as 0 is a solution, whereas the system x + y = 2, 2x + 2y = 3 is inconsistent (subtracting twice the first equation from the second gives 0 = −1).
Definition 2.1.8. [Associated Homogeneous System] Consider a linear system Ax = b. Then, the corresponding linear system Ax = 0 is called the associated homogeneous system. 0 is always a solution of the associated homogeneous system.
The readers are advised to supply the proof of the next theorem that gives information about the solution set of a homogeneous system.
Theorem 2.1.9. Consider a homogeneous linear system Ax = 0.
Solution: Let B0 = [Ab], the augmented matrix. Then, B0 = . We now systematically proceed to get the solution.
The last equation gives z = 1. Using this, the second equation gives y = 1. Finally, the first equation gives x = 1. Hence, the solution set is {[1, 1, 1]T}; that is, the system has a unique solution.
In Example 2.1.12, observe how each operation on the linear system corresponds to a similar operation on the rows of the augmented matrix. We use this idea to define elementary row operations and the equivalence of two linear systems.
Definition 2.1.13. [Elementary Row Operations] Let A ∈ Mm,n(ℂ). Then, the elementary row operations are
1. Eij: interchange the ith and the jth rows of A;
2. Ei(c): multiply the ith row of A by c, for c ≠ 0;
3. Eij(c): add c times the jth row of A to the ith row of A, for c ≠ 0 and i ≠ j.
Definition 2.1.14. [Row Equivalent Matrices] Two matrices are said to be row equivalent if one can be obtained from the other by a finite number of elementary row operations.
Definition 2.1.15. [Row Equivalent Linear Systems] The linear systems Ax = b and Cx = d are said to be row equivalent if their respective augmented matrices, [Ab] and [Cd], are row equivalent.
Thus, note that the linear systems at each step in Example 2.1.12 are row equivalent to each other. We now prove that the solution sets of two row equivalent linear systems are the same.
Lemma 2.1.16. Let Cx = d be the linear system obtained from Ax = b by application of a single elementary row operation. Then, Ax = b and Cx = d have the same solution set.
Proof. We prove the result for the elementary row operation Ejk(c) with c≠0. The reader is advised to prove the result for the other two elementary operations.
In this case, the systems Ax = b and Cx = d differ only in the jth equation. So, we need to show that y satisfies the jth equation of Ax = b if and only if y satisfies the jth equation of Cx = d. So, let yT = [α1,…,αn]. Then, the jth and kth equations of Ax = b are aj1α1 + ⋯ + ajnαn = bj and ak1α1 + ⋯ + aknαn = bk. Therefore, we see that the αi’s satisfy
(aj1 + cak1)α1 + ⋯ + (ajn + cakn)αn = bj + cbk.        (2.1.2)
Also, by definition the jth equation of Cx = d equals
(aj1 + cak1)x1 + ⋯ + (ajn + cakn)xn = bj + cbk.        (2.1.3)
Therefore, using Equation (2.1.2), we see that yT = [α1,…,αn] is also a solution of Equation (2.1.3). Now, use a similar argument to show that if zT = [β1,…,βn] is a solution of Cx = d then it is also a solution of Ax = b. Hence, the required result follows. _

The readers are advised to use Lemma 2.1.16 as an induction step to prove the next result.
Theorem 2.1.17. Let Ax = b and Cx = d be two row equivalent linear systems. Then, they have the same solution set.
In the previous section, we saw that two row equivalent linear systems have the same solution set. Sometimes it helps to imagine an elementary row operation as left multiplication by a suitable matrix. In this section, we will try to understand this relationship and use it to obtain results for linear systems. As special cases, we also obtain results that are very useful in the study of square matrices.
Definition 2.2.1. [Elementary Matrix] A matrix E ∈ Mn(ℂ) is called an elementary matrix if it is obtained by applying exactly one elementary row operation to the identity matrix In.
Remark 2.2.2. The elementary matrices are of three types, and they correspond to the three elementary row operations: Eij (obtained by interchanging the ith and the jth rows of In), Ei(c), for c ≠ 0 (obtained by multiplying the ith row of In by c), and Eij(c), for c ≠ 0 (obtained by adding c times the jth row of In to its ith row).
When an elementary matrix is multiplied on the left of a matrix A, it gives the same result as that of applying the corresponding elementary row operation on A.
Thus, each elementary matrix is invertible. Also, the inverse is an elementary matrix of the same type.
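As a concrete check of the correspondence between elementary row operations and elementary matrices, here is a small NumPy sketch (illustrative only; the helper names E_swap, E_scale, E_add and the test matrix are my own choices, not notation from the text). It builds the three types of elementary matrices from I3, verifies that left multiplication performs the corresponding row operation, and confirms that each inverse is again an elementary matrix of the same type.

```python
import numpy as np

def E_swap(n, i, j):
    """Elementary matrix obtained from I_n by interchanging rows i and j."""
    E = np.eye(n)
    E[[i, j]] = E[[j, i]]
    return E

def E_scale(n, i, c):
    """Elementary matrix obtained from I_n by multiplying row i by c != 0."""
    E = np.eye(n)
    E[i, i] = c
    return E

def E_add(n, i, j, c):
    """Elementary matrix obtained from I_n by adding c times row j to row i."""
    E = np.eye(n)
    E[i, j] = c
    return E

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])

# Left multiplication by an elementary matrix = the corresponding row operation on A.
assert np.allclose(E_swap(3, 0, 2) @ A, A[[2, 1, 0]])
B = A.copy(); B[1] *= 5
assert np.allclose(E_scale(3, 1, 5) @ A, B)
C = A.copy(); C[0] += 2 * C[2]
assert np.allclose(E_add(3, 0, 2, 2) @ A, C)

# Each elementary matrix is invertible; the inverse is elementary of the same type.
assert np.allclose(np.linalg.inv(E_swap(3, 0, 2)), E_swap(3, 0, 2))
assert np.allclose(np.linalg.inv(E_scale(3, 1, 5)), E_scale(3, 1, 1 / 5))
assert np.allclose(np.linalg.inv(E_add(3, 0, 2, 2)), E_add(3, 0, 2, -2))
```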
Proposition 2.2.6. Let A and B be two row equivalent matrices. Then, B = E1⋯EkA, for some elementary matrices E1,…,Ek.
Proof. By definition of row equivalence, the matrix B can be obtained from A by a finite number of elementary row operations. But, by Remark 2.2.2, each elementary row operation on A corresponds to left multiplication of A by an elementary matrix. Thus, the required result follows. _
We now give an alternate proof of Theorem 2.1.17. To do so, we state the theorem once again.
Theorem 2.2.7. Let Ax = b and Cx = d be two row equivalent linear systems. Then, they have the same solution set.
Proof. Let E1,…,Ek be the elementary matrices such that E1⋯Ek[A b] = [C d]. Put E = E1⋯Ek. Then, by Remark 2.2.5,
E[A b] = [EA Eb] = [C d], that is, EA = C and Eb = d.        (2.2.1)
Now assume that Ay = b holds. Then, by Equation (2.2.1),
Cy = EAy = Eb = d.        (2.2.2)
On the other hand, if Cz = d holds then, using Equation (2.2.1) and the invertibility of E (a product of invertible matrices), we have
Az = E-1Cz = E-1d = b.        (2.2.3)
Therefore, using Equations (2.2.2) and (2.2.3) the required result follows. _
The following result is a particular case of Theorem 2.2.7.
Corollary 2.2.8. Let A and B be two row equivalent matrices. Then, the systems Ax = 0 and Bx = 0 have the same solution set.
Example 2.2.9. Are the matrices A = and B = row equivalent?
Solution: No, as is a solution of Bx = 0 but it isn’t a solution of Ax = 0.
Definition 2.2.10. [Pivot/Leading Entry] Let A be a nonzero matrix. Then, in each nonzero row of A, the left most nonzero entry is called a pivot/leading entry. The column containing the pivot is called a pivotal column. If aij is a pivot then we usually highlight it to distinguish it from the other entries. For example, the entries a12 and a23 are pivots in A = . Thus, columns 2 and 3 are pivotal columns.
Definition 2.2.13. [Row-Reduced Echelon Form (RREF)] A matrix C is said to be in row-reduced echelon form (RREF) if
1. the zero rows of C, if any, appear below all the nonzero rows;
2. the pivot of each nonzero row equals 1;
3. the pivot is the only nonzero entry in its pivotal column; and
4. the pivots move to the right as we go down the rows, i.e., if the pivots of the nonzero rows appear in columns i1, i2, …, then i1 < i2 < ⋯.
A matrix in RREF is also called a row-reduced echelon matrix.
Let A ∈ Mm,n(ℂ). We now present an algorithm, commonly known as the Gauss-Jordan Elimination (GJE), to compute the RREF of A.
Important: The process will stop, as we can get at most min{m,n} pivots.
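The GJE steps can also be phrased as a short routine. The following is a minimal sketch using exact rational arithmetic via fractions.Fraction; the function name rref and the sample system are my own choices, and the steps are the standard pivot-search, scale and eliminate moves rather than the text's exact pseudocode.

```python
from fractions import Fraction

def rref(rows):
    """Return the RREF of a matrix given as a list of rows (exact arithmetic)."""
    M = [[Fraction(x) for x in row] for row in rows]
    m, n = len(M), len(M[0])
    pivot_row = 0
    for col in range(n):
        if pivot_row == m:                      # at most min(m, n) pivots
            break
        # find a row at or below pivot_row with a nonzero entry in this column
        r = next((i for i in range(pivot_row, m) if M[i][col] != 0), None)
        if r is None:
            continue                            # no pivot in this column
        M[pivot_row], M[r] = M[r], M[pivot_row]           # interchange rows
        p = M[pivot_row][col]
        M[pivot_row] = [x / p for x in M[pivot_row]]      # make the pivot 1
        for i in range(m):                                # eliminate the column
            if i != pivot_row and M[i][col] != 0:
                c = M[i][col]
                M[i] = [a - c * b for a, b in zip(M[i], M[pivot_row])]
        pivot_row += 1
    return M

# Augmented matrix of x + y + z = 3, 2y + z = 3, x + 2z = 3.
print(rref([[1, 1, 1, 3], [0, 2, 1, 3], [1, 0, 2, 3]]))
```

The sample augmented matrix encodes a system whose unique solution is x = y = z = 1, echoing the computation around Example 2.1.12 (though it is not necessarily the same system).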
The proof of the next result is beyond the scope of this book and hence is omitted.
As an immediate corollary, we obtain the following important result.
Proof. Suppose there exists a matrix A with two different RREFs, say B and C. As the RREFs are obtained by left multiplication by elementary matrices, there exist elementary matrices E1,…,Ek and F1,…,Fℓ such that B = E1⋯EkA and C = F1⋯FℓA. Let E = E1⋯Ek and F = F1⋯Fℓ. Thus, B = EA = EF-1C.
As the inverse of an elementary matrix is an elementary matrix, F-1 is a product of elementary matrices and hence, B and C are row equivalent. As B and C are in RREF, using Theorem 2.2.17, B = C. _
Remark 2.2.19. Let A ∈ Mm,n(ℂ).
Proof. Let E1,…,Ek and F1,…,Fℓ be elementary matrices such that RREF(A) = E1⋯EkA and RREF(B) = F1⋯FℓB. Then,
Proof. By definition, RREF(A) = E1⋯EkA, for certain elementary matrices E1,…,Ek. Take P = E1⋯Ek. Then, P is invertible (a product of invertible matrices is invertible) and PA = RREF(A). _
Proof. By Remark 2.2.19.4, there exists an invertible matrix P such that
Example 2.2.20. Consider a linear system Ax = b, where A ∈ M3(ℂ) and A[:,1]≠0 (recall Example 2.1.1.3). Then, verify that the 7 different choices for [Cd] = RREF([Ab]) are
Proposition 2.2.21. Let A ∈ Mn(ℂ). Then, A is invertible if and only if RREF(A) = In. In particular, every invertible matrix is a product of elementary matrices.
Proof. If RREF(A) = In then In = E1⋯EkA, for some elementary matrices E1,…,Ek. As Ei’s are invertible, E1-1 = E2⋯EkA, E2-1E1-1 = E3⋯EkA and so on. Finally, one obtains A = Ek-1⋯E1-1. A similar calculation now gives AE1⋯Ek = In. Hence, by definition of invertibility, A-1 = E1⋯Ek.
Now, let A be invertible with B = RREF(A) = E1⋯EkA, for some elementary matrices E1,…,Ek. As A and Ei’s are invertible, the matrix B is invertible. Hence, B doesn’t have any zero row. Thus, all the n rows of B have pivots. Therefore, B has n pivotal columns. As B has exactly n columns, each column is a pivotal column and hence B = In. Thus, the required result follows. _
As a direct application of Proposition 2.2.21 and Remark 2.2.19.3 one obtains the following.
Proposition 2.2.23. Let A ∈ Mn(ℂ) be an invertible matrix. Then, for any matrix B, define C = and D = . Then, RREF(C) = and RREF(D) = .
Proof. Using matrix product,
For the second part, note that the matrix X = is an invertible matrix. Thus, by Proposition 2.2.21, X is a product of elementary matrices. Now, verify that XD = . As is in RREF, a repeated application of Remark 2.2.19.1 gives the required result. _
As an application of Proposition 2.2.23, we have the following observation.
Let A ∈ Mn(ℂ). Suppose we start with C = [A In] and compute RREF(C). If RREF(C) = [G H] then either G = In or G ≠ In. If G = In then we must have H = A-1, whereas if G ≠ In then A is not invertible. We explain this with an example.
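The [A In] procedure just described is easy to mechanize. Below is a minimal sketch that reuses the rref routine from the GJE sketch earlier; the function name inverse_via_gje and the sample matrix are made up for illustration and are not taken from the text.

```python
from fractions import Fraction

def inverse_via_gje(A):
    """Return A^{-1} by row reducing [A | I_n], or None if A is not invertible."""
    n = len(A)
    aug = [list(A[i]) + [1 if j == i else 0 for j in range(n)] for i in range(n)]
    R = rref(aug)                       # rref as sketched after the GJE algorithm
    G = [row[:n] for row in R]          # left block G
    H = [row[n:] for row in R]          # right block H
    identity = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    return H if G == identity else None # G = I_n  <=>  A invertible, and then H = A^{-1}

A = [[2, 1, 1], [1, 2, 1], [1, 1, 2]]
print(inverse_via_gje(A))
```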
Example 2.2.24. Use GJE to find the inverse of A = .
Solution: Applying GJE to [A|I3] = gives
Definition 2.2.26. [Rank of a Matrix] Let A ∈ Mm,n(ℂ). Then, the rank of A, denoted Rank(A), is the number of pivots in the RREF(A). For example, Rank(In) = n and Rank(0) = 0.
Remark 2.2.27. Before proceeding further, for A ∈ Mm,n(ℂ), we observe the following.
We now illustrate the calculation of the rank by giving a few examples.
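As a computational aside (not one of the text's examples), the rank can be read off as the number of nonzero rows, equivalently the number of pivots, of the RREF; for floating-point data, numpy.linalg.matrix_rank gives a convenient cross-check. The routine below reuses the rref sketch from earlier, and the sample matrix is arbitrary.

```python
import numpy as np

def rank_via_rref(rows):
    """Rank = number of nonzero rows (equivalently, pivots) in the RREF."""
    R = rref(rows)                      # rref as sketched earlier
    return sum(any(x != 0 for x in row) for row in R)

A = [[1, 2, 3], [2, 4, 6], [1, 1, 1]]   # second row is twice the first
print(rank_via_rref(A))                                  # 2
print(np.linalg.matrix_rank(np.array(A, dtype=float)))   # 2, as a cross-check
```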
We now show that the rank doesn’t change if a matrix is multiplied on the left by an invertible matrix.
Proof. By Theorem 2.2.22, RREF(A) = RREF(SA). Hence, Rank(SA) = Rank(A). _
We now have the following result.
Corollary 2.2.30. Let A ∈ Mm,n(ℂ) and B ∈ Mn,q(ℂ). Then, Rank(AB) ≤Rank(A).
In particular, if B ∈ Mn(ℂ) is invertible then Rank(AB) = Rank(A).
Proof. Let Rank(A) = r. Then, there exists an invertible matrix P and A1 ∈ Mr,n(ℂ) such that PA = RREF(A) = $\begin{bmatrix} A_1 \\ 0 \end{bmatrix}$. Then, PAB = $\begin{bmatrix} A_1 \\ 0 \end{bmatrix}$B = $\begin{bmatrix} A_1B \\ 0 \end{bmatrix}$. So, using Lemma 2.2.29 and Remark 2.2.27.2, we get
Rank(AB) = Rank(PAB) = Rank($\begin{bmatrix} A_1B \\ 0 \end{bmatrix}$) ≤ r = Rank(A).        (2.2.4)
In particular, if B is invertible then, using Equation (2.2.4), we get Rank(A) = Rank((AB)B-1) ≤ Rank(AB) ≤ Rank(A), so that Rank(AB) = Rank(A). _
Theorem 2.2.31. Let A ∈ Mm,n(ℂ). If Rank(A) = r then there exist invertible matrices P and Q such that PAQ = $\begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}$.
Proof. Let C = RREF(A). Then, by Remark 2.2.19.4, there exists an invertible matrix P such that C = PA. Note that C has r pivots and they appear in columns, say i1 < i2 < ⋯ < ir.
Now, let D = CE1i1E2i2⋯Erir. As the Ejij’s are elementary matrices that interchange the columns of C, one has D = $\begin{bmatrix} I_r & B \\ 0 & 0 \end{bmatrix}$, where B ∈ Mr,n-r(ℂ).
Put Q1 = E1i1E2i2⋯Erir. Then, Q1 is invertible. Let Q2 = $\begin{bmatrix} I_r & -B \\ 0 & I_{n-r} \end{bmatrix}$. Then, verify that Q2 is invertible and PAQ1Q2 = DQ2 = $\begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}$. Thus, taking Q = Q1Q2 completes the proof. _
We now prove the following result.
Proposition 2.2.32. Let A ∈ Mn(ℂ) be an invertible matrix.
In particular, if B = A[S,:] and C = A[:,T], for some subsets S,T of [n] then Rank(B) = |S| and Rank(C) = |T|.
Proof. Since A is invertible, RREF(A) = In. Hence, by Remark 2.2.19.4, there exists an invertible matrix P such that PA = In. Thus,
For the second part, let us assume that Rank(B1) = t < s. Then, by Remark 2.2.19.4, there exists an invertible matrix Q such that
QB1 = RREF(B1) = $\begin{bmatrix} C \\ 0 \end{bmatrix}$,        (2.2.5)
for some matrix C, where C is in RREF and has exactly t pivots. Since t < s, QB1 has at least one zero row.
As PA = In, one has AP = In. Hence, = P = AP = In = . Thus,
| (2.2.6) |
Further, using Equations (2.2.5) and (2.2.6), we see that
As a direct corollary of Theorem 2.2.31 and Proposition 2.2.32, we have the following result which improves Corollary 2.2.30.
Corollary 2.2.33. Let A ∈ Mm,n(ℂ). If Rank(A) = r < n then there exists an invertible matrix Q and B ∈ Mm,r(ℂ) such that AQ = [B 0], where Rank(B) = r.
Proof. By Theorem 2.2.31, there exist invertible matrices P and Q such that PAQ = $\begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}$. If P-1 = [B C], where B ∈ Mm,r(ℂ) and C ∈ Mm,m-r(ℂ), then
AQ = P-1$\begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}$ = [B C]$\begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}$ = [B 0].
Further, B consists of r columns of the invertible matrix P-1 and hence, by Proposition 2.2.32, Rank(B) = r. _
As an application of Corollary 2.2.33, we have the following result.
Proof. Let Rank(B) = r. Then, by Corollary 2.2.33, there exists an invertible matrix Q and a matrix C ∈ Mn,r(ℂ) such that BQ = [C 0] and Rank(C) = r. Hence, ABQ = A[C 0] = [AC 0]. Thus, using Corollary 2.2.30 and Remark 2.2.27.2, we get Rank(AB) = Rank(ABQ) = Rank([AC 0]) = Rank(AC) ≤ r = Rank(B). _
We end this section by relating the rank of the sum of two matrices with the sum of their ranks.
Proposition 2.2.35. Let A, B ∈ Mm,n(ℂ). Then, Rank(A + B) ≤ Rank(A) + Rank(B). In particular, if A = $\sum_{i=1}^{k} x_i y_i^*$, for some xi ∈ ℂ^m and yi ∈ ℂ^n, 1 ≤ i ≤ k, then Rank(A) ≤ k.
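Before turning to the proof, here is a quick numerical illustration of both claims (a sketch with arbitrarily chosen random data, not part of the formal argument).

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 5, 4, 2

# A = sum of k outer products x_i y_i^*  =>  Rank(A) <= k
xs = [rng.standard_normal(m) for _ in range(k)]
ys = [rng.standard_normal(n) for _ in range(k)]
A = sum(np.outer(x, y) for x, y in zip(xs, ys))
print(np.linalg.matrix_rank(A), "<=", k)

# Rank(A + B) <= Rank(A) + Rank(B)
B = rng.standard_normal((m, n))
print(np.linalg.matrix_rank(A + B), "<=",
      np.linalg.matrix_rank(A) + np.linalg.matrix_rank(B))
```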
Proof. Let Rank(A) = r. Then, there exists an invertible matrix P and a matrix A1 ∈ Mr,n(ℂ) such that PA = RREF(A) = $\begin{bmatrix} A_1 \\ 0 \end{bmatrix}$. Then,
Definition 2.2.37. [Basic, Free Variables] Consider the linear system Ax = b with RREF([A b]) = [C d]. Then, the variables corresponding to the pivotal columns of C are called the basic variables and the variables that are not basic are called free variables.
We now prove the main result in the theory of linear systems. Before doing so, we look at the following example.
Example 2.2.39. Consider a linear system Ax = b. Suppose RREF([Ab]) = [Cd], where
Therefore, the solution set is given by
Theorem 2.2.40. Let Ax = b be a linear system in n variables with RREF([A b]) = [C d], Rank(A) = r and Rank([A b]) = ra.
1. If r < ra then the system Ax = b is inconsistent.
2. If r = ra then the system Ax = b is consistent. Furthermore,
(a) if r = n then the system has a unique solution;
(b) if r < n then the system has infinitely many solutions; in fact, the solution set equals {x0 + k1u1 + ⋯ + kn-run-r : ki ∈ ℂ}, for suitable vectors x0, u1, …, un-r with Ax0 = b and Aui = 0.
Proof. Part 1: As r < ra, by Remark 2.2.19.5, ([C d])[r + 1, :] = [0T 1]. Note that this row corresponds to the linear equation 0·x1 + 0·x2 + ⋯ + 0·xn = 1, which has no solution. Hence, the system Ax = b is inconsistent.
Part 2: As r = ra, by Remark 2.2.19.5, [C d] doesn’t have a row of the form [0T 1]. Further, the number of pivots in [C d] and that in C is the same, namely, r pivots. Suppose the pivots appear in columns i1,…,ir with 1 ≤ i1 < ⋯ < ir ≤ n. Thus, the variables xij, for 1 ≤ j ≤ r, are basic variables and the remaining n − r variables, say xt1,…,xtn-r, are free variables with t1 < ⋯ < tn-r. Since C is in RREF, in terms of the free variables and basic variables, the ℓ-th row of [C d], for 1 ≤ ℓ ≤ r, corresponds to the equation
$x_{i_\ell} + \sum_{j=1}^{n-r} c_{\ell t_j} x_{t_j} = d_\ell$, that is, $x_{i_\ell} = d_\ell - \sum_{j=1}^{n-r} c_{\ell t_j} x_{t_j}$,        (2.2.7)
where $c_{\ell t_j}$ denotes the $(\ell, t_j)$-th entry of C.
Part 2a: As r = n, there are no free variables. Hence, xi = di, for 1 ≤ i ≤ n, is the unique solution.
Part 2b: Define x0 to be the vector obtained from (2.2.7) by assigning the value 0 to every free variable (so that the basic variable xiℓ equals dℓ) and, for 1 ≤ i ≤ n − r, define ui to be the vector obtained by setting the free variable xti equal to 1, the other free variables equal to 0, and reading off the basic variables from the homogeneous part of (2.2.7). Then, it can be easily verified that Ax0 = b and, for 1 ≤ i ≤ n − r, Aui = 0. Also, by Equation (2.2.7), the solution set indeed has the required form, where ki corresponds to the free variable xti. As there is at least one free variable, the system has infinitely many solutions. Thus, the proof of the theorem is complete. _
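The construction in Part 2b can be carried out mechanically: read the basic and free variables off RREF([A b]), take x0 from the d column and build one vector ui per free variable. The following sketch does exactly this, assuming the exact-arithmetic rref routine from earlier; the function name general_solution and the sample system are illustrative, not taken from the text.

```python
from fractions import Fraction

def general_solution(A, b):
    """Return (x0, [u1, ..., u_{n-r}]) for a consistent system Ax = b, or None."""
    m, n = len(A), len(A[0])
    R = rref([list(A[i]) + [b[i]] for i in range(m)])   # rref as sketched earlier
    pivots = []                                          # pivotal columns of [C d]
    for row in R:
        cols = [j for j, x in enumerate(row) if x != 0]
        if cols:
            pivots.append(cols[0])
    if n in pivots:                                      # a row of the form [0 ... 0 | 1]
        return None                                      # inconsistent system
    free = [j for j in range(n) if j not in pivots]
    x0 = [Fraction(0)] * n
    for r, p in enumerate(pivots):
        x0[p] = R[r][n]                                  # basic variable = entry of d
    us = []
    for f in free:                                       # one solution of Ax = 0 per free variable
        u = [Fraction(0)] * n
        u[f] = Fraction(1)
        for r, p in enumerate(pivots):
            u[p] = -R[r][f]
        us.append(u)
    return x0, us

# x + y + 2z = 4, 2x + 2y + 4z = 8: one basic variable, two free variables.
print(general_solution([[1, 1, 2], [2, 2, 4]], [4, 8]))
```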
Exercise 2.2.41. Consider the linear system given below. Use GJE to find the RREF of its augmented matrix. Now, use the technique used in the previous theorem to find the solution of the linear system.
Let A ∈ Mm,n(ℂ). Then, Rank(A) ≤ m. Thus, using Theorem 2.2.40 the next result follows.
Corollary 2.2.42. Let A ∈ Mm,n(ℂ). If Rank(A) = r < min{m,n} then Ax = 0 has infinitely many solutions. In particular, if m < n, then Ax = 0 has infinitely many solutions. Hence, in either case, the homogeneous system Ax = 0 has at least one non-trivial solution.
Remark 2.2.43. Let A ∈ Mm,n(ℂ). Then, Theorem 2.2.40 implies that Ax = b is consistent if and only if Rank(A) = Rank([Ab]). Further, the vectors associated to the free variables in Equation (2.2.7) are solutions to the associated homogeneous system Ax = 0.
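The consistency criterion of Remark 2.2.43 is easy to test numerically; the snippet below (with made-up data, using NumPy's matrix_rank) compares Rank(A) with Rank([A b]). Its first test is the inconsistent system x + y = 2, 2x + 2y = 3 seen earlier.

```python
import numpy as np

def is_consistent(A, b):
    """Ax = b is consistent iff Rank(A) = Rank([A b])."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.hstack([A, b]))

print(is_consistent([[1, 1], [2, 2]], [2, 3]))   # False: x + y = 2, 2x + 2y = 3
print(is_consistent([[1, 1], [2, 2]], [2, 4]))   # True
```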
We end this subsection with some applications.
In this section the coefficient matrix of the linear system Ax = b will be a square matrix. We start with proving a few equivalent conditions that relate different ideas.
Proof. 1 ⇔ 2 Already done in Proposition 2.2.21.
2 ⇔ 3 Again, done in Proposition 2.2.21.
3 ⇒ 4 Let A = E1⋯Ek, for some elementary matrices E1,…,Ek. Then, by the previous equivalence, A is invertible. So, A-1 exists and A-1A = In. Hence, if x0 is any solution of the homogeneous system Ax = 0 then x0 = (A-1A)x0 = A-1(Ax0) = A-1·0 = 0. Thus, 0 is the only solution of Ax = 0.
4 ⇒ 5 Let, if possible, Rank(A) = r < n. Then, by Corollary 2.2.42, the homogeneous system Ax = 0 has infinitely many solutions, a contradiction. Thus, A has full rank.
5 ⇒ 2 Suppose Rank(A) = n. So, RREF(A) has n pivotal columns. But, RREF(A) has exactly n columns and hence each column is a pivotal column. Thus, RREF(A) = In. _
We end this section by giving two more equivalent conditions for a matrix to be invertible.
Proof. 1 ⇒ 2 Note that x0 = A-1b is the unique solution of Ax = b.
2 ⇒ 3 The system is consistent as Ax = b has a solution.
3 ⇒ 1 For 1 ≤ i ≤ n, define eiT = In[i,:]. By assumption, the linear system Ax = ei has a solution, say xi, for 1 ≤ i ≤ n. Define the matrix B = [x1,…,xn]. Then, AB = [Ax1,…,Axn] = [e1,…,en] = In. Hence, n = Rank(In) = Rank(AB) ≤ Rank(A), so that Rank(A) = n and, by Theorem 2.3.1, A is invertible. _
We now give an immediate application of Theorem 2.3.2 and Theorem 2.3.1 without proof.
As an immediate consequence of Theorem 2.3.1, the readers should prove that, to establish the invertibility of A ∈ Mn(ℂ), it suffices to compute either a left or a right inverse.
In this section, we associate a number with each square matrix. To start with, recall the notations used in Section 1.3.1. Then, for A = , A(1|2) = and A({1,2}|{1,3}) = [4].
With the notations as above, we are ready to give an inductive definition of the determinant of a square matrix. The advanced students can find an alternate definition of the determinant in Appendix 9.2.22, where it is proved that the definition given below corresponds to the expansion of determinant along the first row.
Definition 2.3.6. Let A = [aij] be a square matrix of order n. Then, the determinant of A, denoted det(A) (or |A|), is defined by
det(A) = a11, if n = 1, and det(A) = $\sum_{j=1}^{n} (-1)^{1+j} a_{1j} \det(A(1|j))$, if n > 1.
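Definition 2.3.6 translates directly into a recursive routine. The sketch below (illustrative and exponential-time, with exact arithmetic; the function name det is my own) expands along the first row, forming A(1|j) by deleting row 1 and column j.

```python
from fractions import Fraction

def det(A):
    """Determinant by cofactor expansion along the first row (Definition 2.3.6)."""
    n = len(A)
    if n == 1:
        return Fraction(A[0][0])
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]   # A(1|j)
        total += (-1) ** j * Fraction(A[0][j]) * det(minor) # (-1)**j == (-1)^{1+j} for 1-based j
    return total

print(det([[2, 1], [1, 2]]))                      # 3
print(det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))    # -3
```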
Definition 2.3.9. [Singular, Non-Singular Matrices] A matrix A is said to be singular if det(A) = 0 and is called non-singular if det(A) ≠ 0.
The next result relates the determinant with row operations. For proof, see Appendix 9.3.
Theorem 2.3.10. Let A be an n × n matrix.
As det(In) = 1, we have the following result.
Example 2.3.12. Since , using Theorem 2.3.10, we see that, for A = , det(A) = 2 ⋅ (1 ⋅ 2 ⋅ (-1)) = -4, where the first 2 appears from the elementary matrix E1().
Exercise 2.3.13. Prove the following without computing the determinant (use Theorem 2.3.10).
By Theorem 2.3.10.6 det(In) = 1. The next result about the determinant of elementary matrices is an immediate consequence of Theorem 2.3.10 and hence the proof is omitted.
Remark 2.3.14. Theorem 2.3.10.1 implies that the determinant can be calculated by expanding along any row. Hence, the readers are advised to verify that, for each fixed i with 1 ≤ i ≤ n, det(A) = $\sum_{j=1}^{n} (-1)^{i+j} a_{ij} \det(A(i|j))$.
Example 2.3.15. Using Remark 2.3.14, one has
= (-1)2+3 ⋅ 2 ⋅ + (-1)2+4 ⋅ = -2 ⋅ 1 + (-8) = -10.
Definition 2.3.16. Let A ∈ Mn(ℂ). Then, the cofactor matrix of A, denoted Cof(A), is the matrix in Mn(ℂ) with Cof(A) = [Cij], where Cij = $(-1)^{i+j} \det(A(i|j))$, for 1 ≤ i, j ≤ n. The adjugate of A, denoted Adj(A), is the transpose of the cofactor matrix, i.e., Adj(A) = Cof(A)T.
The next result relates adjugate matrix with the inverse, in case det(A)≠0.
Proof. Part 1: It follows directly from Remark 2.3.14 and the definition of the cofactor.
Part 2: Fix positive integers i, ℓ with 1 ≤ i ≠ ℓ ≤ n and let B = [bij] be a square matrix with B[ℓ,:] = A[i,:] and B[t,:] = A[t,:], for t ≠ ℓ. As ℓ ≠ i, B[ℓ,:] = B[i,:] and thus, by Theorem 2.3.10.5, det(B) = 0. As A(ℓ|j) = B(ℓ|j), for 1 ≤ j ≤ n, using Remark 2.3.14,
0 = det(B) = $\sum_{j=1}^{n} (-1)^{\ell+j} b_{\ell j} \det(B(\ell|j)) = \sum_{j=1}^{n} (-1)^{\ell+j} a_{ij} \det(A(\ell|j))$.        (2.3.2)
This completes the proof of Part 2.
Part 3: Using Equation (2.3.2) and Remark 2.3.14, observe that
Let A be a non-singular matrix. Then, by Theorem 2.3.18.3, A-1 = (1/det(A)) Adj(A). Thus, A·Adj(A) = Adj(A)·A = det(A)·In and this completes the proof of the next result.
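The identities A·Adj(A) = Adj(A)·A = det(A)·In and A-1 = (1/det(A))·Adj(A) can be checked computationally. The sketch below reuses the recursive det routine from above and takes Adj(A) to be the transpose of the cofactor matrix; the helper names and the sample matrix are illustrative, not the text's.

```python
def minor(A, i, j):
    """A(i|j): delete row i and column j."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]

def adjugate(A):
    """Adj(A) = transpose of the cofactor matrix [C_ij], C_ij = (-1)^{i+j} det(A(i|j))."""
    n = len(A)
    cof = [[(-1) ** (i + j) * det(minor(A, i, j)) for j in range(n)] for i in range(n)]
    return [[cof[j][i] for j in range(n)] for i in range(n)]      # transpose

def inverse(A):
    """A^{-1} = (1/det(A)) * Adj(A) for non-singular A; None if A is singular."""
    d = det(A)                           # det as sketched earlier
    if d == 0:
        return None
    return [[x / d for x in row] for row in adjugate(A)]

A = [[1, 2, 0], [0, 1, 1], [1, 0, 1]]
print(inverse(A))
```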
The next result gives another equivalent condition for a square matrix to be invertible.
Proof. Let A be non-singular. Then, det(A) ≠ 0 and hence A is invertible with A-1 = (1/det(A)) Adj(A).
Now, let us assume that A is invertible. Then, using Theorem 2.3.1, A = E1⋯Ek, a product of elementary matrices. Also, by Corollary 2.3.11, det(Ei) ≠ 0, for 1 ≤ i ≤ k. Thus, a repeated application of Parts 1, 2 and 3 of Theorem 2.3.10 gives det(A) ≠ 0. _
The next result relates the determinant of a matrix with the determinant of its transpose. Thus, the determinant can be computed by expanding along any column as well.
Proof. If A is non-singular, Corollary 2.3.20 gives det(A) = det(AT).
If A is singular then, by Theorem 2.3.21, A is not invertible. So, AT is also not invertible and hence by Theorem 2.3.21, det(AT ) = 0 = det(A). _
The next result relates the determinant of the product of two matrices with their determinants.
Proof. Case 1: Let A be non-singular. Then, by Theorem 2.3.18.3, A is invertible and, by Theorem 2.3.1, A = E1⋯Ek, a product of elementary matrices. Thus, a repeated application of Parts 1, 2 and 3 of Theorem 2.3.10 gives the desired result as
det(AB) = det(E1⋯EkB) = det(E1) det(E2⋯EkB) = ⋯ = det(E1)⋯det(Ek) det(B) = det(E1⋯Ek) det(B) = det(A) det(B).
Case 2: Let A be singular. Then, by Theorem 2.3.21, A is not invertible. So, by Proposition 2.2.21, there exists an invertible matrix P such that PA = $\begin{bmatrix} A_1 \\ 0 \end{bmatrix}$, for some matrix A1. So, A = P-1$\begin{bmatrix} A_1 \\ 0 \end{bmatrix}$. As P is invertible, using Part 1, we have
det(AB) = det(P-1$\begin{bmatrix} A_1 \\ 0 \end{bmatrix}$B) = det(P-1) det($\begin{bmatrix} A_1B \\ 0 \end{bmatrix}$) = det(P-1)·0 = 0 = det(A) det(B),
as a matrix with a zero row has zero determinant and det(A) = 0. _
Example 2.3.24. Let A be an orthogonal matrix. Then, by definition, AAT = I. Thus, by Theorems 2.3.23 and 2.3.22, 1 = det(I) = det(AAT) = det(A) det(AT) = det(A)^2. Hence, det(A) = ±1.
Let A be a square matrix. Then, combining Theorem 2.3.2 and Theorem 2.3.21, one has the following result.
Thus, Ax = b has a unique solution for every b if and only if det(A)≠0. The next theorem gives a direct method of finding the solution of the linear system Ax = b when det(A)≠0.
Theorem 2.3.27 (Cramer’s Rule). Let A be an n × n non-singular matrix. Then, the unique solution of the linear system Ax = b with xT = [x1,…,xn] is given by
xj = det(Aj)/det(A), for 1 ≤ j ≤ n,
where Aj is the matrix obtained from A by replacing its jth column A[:,j] by the column vector b.
Proof. Since det(A) ≠ 0, A is invertible. Thus, there exists an invertible matrix P such that PA = In and P[A|b] = [I|Pb]. Then, A-1 = P. Let d = Pb = A-1b. Then, Ax = b has the unique solution xj = dj, for 1 ≤ j ≤ n. Also, [e1,…,en] = I = PA = [PA[:,1],…,PA[:,n]]. Thus,
PAj = [PA[:,1],…,PA[:,j−1], Pb, PA[:,j+1],…,PA[:,n]] = [e1,…,ej−1, d, ej+1,…,en].
Expanding along the jth row, the determinant of the last matrix equals dj. Hence, det(P) det(Aj) = det(PAj) = dj. As det(P) det(A) = det(PA) = det(In) = 1, we get det(Aj) = dj det(A), and therefore xj = dj = det(Aj)/det(A). _
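Cramer's rule is equally short to code: replace the jth column of A by b and divide determinants. The sketch below reuses the recursive det routine from above; the function name cramer and the sample system are made up, not those of Example 2.3.28.

```python
def cramer(A, b):
    """Solve Ax = b for non-singular A via x_j = det(A_j)/det(A) (Cramer's rule)."""
    n = len(A)
    d = det(A)                           # det as sketched earlier
    if d == 0:
        raise ValueError("Cramer's rule needs a non-singular coefficient matrix")
    x = []
    for j in range(n):
        # A_j: the jth column of A replaced by b
        Aj = [row[:j] + [b[i]] + row[j + 1:] for i, row in enumerate(A)]
        x.append(det(Aj) / d)
    return x

print(cramer([[2, 1], [1, 3]], [3, 5]))   # [Fraction(4, 5), Fraction(7, 5)]
```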
Example 2.3.28. Solve Ax = b using Cramer’s rule, where A = and b = .
Solution: Check that det(A) = 1 and xT = [-1,1,0] as
In this chapter, we started with a system of m linear equations in n variables and formally wrote it as Ax = b and, in turn, as the augmented matrix [A|b]. Then, the basic operations on equations led to multiplication by elementary matrices on the left of [A|b]. These elementary matrices are invertible and applying GJE to a matrix A results in the RREF of A. We used the pivots in the RREF to define the rank of a matrix. So, if Rank(A) = r and Rank([A|b]) = ra
We have also seen that the following conditions are equivalent for A ∈ Mn(ℂ).
So, overall we have learnt to solve the following type of problems: