The third row of A is the sum of its first and second rows, so we know that if Ax=b, the third component of b equals the sum of its first and second components. If b does not satisfy $b_3 = b_1 + b_2$, the system has no solution. If a combination of the rows of A gives the zero row, then the same combination of the entries of b must equal zero.
One way to find out whether Ax=b is solvable is to use elimination on the augmented matrix. If a row of A is completely eliminated, so is the corresponding entry in b. In our example, row 3 of A is completely eliminated:

$$\left[\begin{array}{cccc|c} 1 & 2 & 2 & 2 & b_1 \\ 2 & 4 & 6 & 8 & b_2 \\ 3 & 6 & 8 & 10 & b_3 \end{array}\right] \longrightarrow \left[\begin{array}{cccc|c} 1 & 2 & 2 & 2 & b_1 \\ 0 & 0 & 2 & 4 & b_2 - 2b_1 \\ 0 & 0 & 0 & 0 & b_3 - b_2 - b_1 \end{array}\right].$$
If Ax=b has a solution, then $b_3 - b_2 - b_1 = 0$. For example, we could choose $b = \begin{bmatrix} 1 \\ 5 \\ 6 \end{bmatrix}$.
From an earlier lecture, we know that Ax=b is solvable exactly when b is in the column space C(A). We have these two conditions on b; in fact they are equivalent.
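The equivalence of the two conditions can be checked numerically: b is in C(A) exactly when appending b as an extra column does not increase the rank. A minimal NumPy sketch, assuming the lecture's example matrix A (rows (1,2,2,2), (2,4,6,8), (3,6,8,10), so that row 3 is the sum of rows 1 and 2):

```python
import numpy as np

# Assumed example matrix: row 3 is the sum of rows 1 and 2.
A = np.array([[1, 2, 2, 2],
              [2, 4, 6, 8],
              [3, 6, 8, 10]], dtype=float)

def is_solvable(A, b):
    # Ax = b is solvable exactly when b lies in the column space C(A),
    # i.e. when appending b as a column does not increase the rank.
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

b_good = np.array([1.0, 5.0, 6.0])   # satisfies b3 = b1 + b2
b_bad  = np.array([1.0, 5.0, 7.0])   # violates  b3 = b1 + b2

print(is_solvable(A, b_good))  # solvable
print(is_solvable(A, b_bad))   # not solvable
```

The rank comparison is the numerical counterpart of the elimination test: a zero row in A with a nonzero entry in the eliminated b would raise the rank of the augmented matrix.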
In order to find all solutions to Ax=b we first check that the equation is solvable, then find a particular solution. We get the complete solution of the equation by adding the particular solution to all the vectors in the nullspace.
The general solution to Ax=b is given by $x_{complete} = x_p + x_n$, where $x_n$ is a generic vector in the nullspace. To see this, we add $Ax_p = b$ to $Ax_n = 0$ and get $A(x_p + x_n) = b$ for every vector $x_n$ in the nullspace.
Last lecture we learned that the nullspace of A is the collection of all combinations of the special solutions $\begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix}$ and $\begin{bmatrix} 2 \\ 0 \\ -2 \\ 1 \end{bmatrix}$. So the complete solution to the equation $Ax = \begin{bmatrix} 1 \\ 5 \\ 6 \end{bmatrix}$ is:
$$x_{complete} = \begin{bmatrix} -2 \\ 0 \\ 3/2 \\ 0 \end{bmatrix} + c_1 \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} + c_2 \begin{bmatrix} 2 \\ 0 \\ -2 \\ 1 \end{bmatrix},$$
where $c_1$ and $c_2$ are real numbers.
The nullspace of A is a two-dimensional subspace of $\mathbb{R}^4$, and the solutions to the equation Ax=b form a plane parallel to it through $x_p = \begin{bmatrix} -2 \\ 0 \\ 3/2 \\ 0 \end{bmatrix}$.
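The structure of the complete solution can be verified directly: since $Ax_p = b$ and each special solution lies in the nullspace, every combination $x_p + c_1 s_1 + c_2 s_2$ solves Ax=b. A sketch, again assuming the lecture's example matrix A:

```python
import numpy as np

# Assumed example matrix from the lecture (row 3 = row 1 + row 2).
A = np.array([[1, 2, 2, 2],
              [2, 4, 6, 8],
              [3, 6, 8, 10]], dtype=float)
b = np.array([1.0, 5.0, 6.0])

xp = np.array([-2.0, 0.0, 1.5, 0.0])   # particular solution
s1 = np.array([-2.0, 1.0, 0.0, 0.0])   # special solution 1
s2 = np.array([2.0, 0.0, -2.0, 1.0])   # special solution 2

# A xp = b and A s_i = 0, so A(xp + c1 s1 + c2 s2) = b for any c1, c2.
rng = np.random.default_rng(0)
for _ in range(5):
    c1, c2 = rng.standard_normal(2)
    x = xp + c1 * s1 + c2 * s2
    assert np.allclose(A @ x, b)
print("complete solution verified")
```

Each random choice of $(c_1, c_2)$ picks one point on the solution plane; the loop confirms the plane is parallel to the nullspace and passes through $x_p$.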
If r=n, then from the previous lecture we know that the nullspace has dimension n−r=0 and contains only the zero vector. There are no free variables or special solutions.
If Ax=b has a solution, it is unique; there is either 0 or 1 solution. Examples like this, in which the columns are independent, are common in applications.
We know r≤m, so if r=n the number of columns of the matrix is less than or equal to the number of rows. The row reduced echelon form of the matrix will look like $R = \begin{bmatrix} I \\ 0 \end{bmatrix}$. For any vector b in $\mathbb{R}^m$ that's not a linear combination of the columns of A, there is no solution to Ax=b.
If r=m, then the reduced matrix $R = \begin{bmatrix} I & F \end{bmatrix}$ has no rows of zeros and so there are no requirements for the entries of b to satisfy. The equation Ax=b is solvable for every b. There are n−r=n−m free variables, so there are n−m special solutions to Ax=0.
If r=m=n is the number of pivots of A, then A is an invertible square matrix and R is the identity matrix. The nullspace has dimension zero, and Ax=b has a unique solution for every b in $\mathbb{R}^m$.
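The full-rank square case is easy to illustrate in code; a minimal sketch using a hypothetical invertible 2×2 matrix (not from the lecture):

```python
import numpy as np

# A hypothetical invertible 2x2 matrix: r = m = n = 2.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
n = A.shape[1]

# Full rank: the nullspace is {0}, so there are no special solutions.
assert np.linalg.matrix_rank(A) == n

b = np.array([5.0, 6.0])
x = np.linalg.solve(A, b)   # the unique solution for this b
assert np.allclose(A @ x, b)
print(x)
```

Because the nullspace contains only the zero vector, the particular solution returned by `solve` is the complete solution.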
Problem 8.1: (3.4 #13.(a,b,d) Introduction to Linear Algebra: Strang) Explain why these are all false:
a) The complete solution is any linear combination of xp and xn .
b) The system Ax=b has at most one particular solution.
c) If A is invertible there is no solution xn in the nullspace.
Solution
a) The coefficient of $x_p$ must be one: a combination $c_1 x_p + c_2 x_n$ satisfies $A(c_1 x_p + c_2 x_n) = c_1 b$, which equals b only when $c_1 = 1$.
b) If xn∈N(A) is in the nullspace of A and xp is one particular solution, then xp+xn is also a particular solution.
c) There’s always xn=0 .
Problem 8.2: (3.4 #28.) Let
$$U = \begin{bmatrix} 1 & 0 & 2 \\ 0 & 3 & 4 \end{bmatrix} \quad \text{and} \quad c = \begin{bmatrix} 5 \\ 8 \end{bmatrix}.$$
Use Gauss-Jordan elimination to reduce the matrices $\begin{bmatrix} U & 0 \end{bmatrix}$ and $\begin{bmatrix} U & c \end{bmatrix}$ to $\begin{bmatrix} R & 0 \end{bmatrix}$ and $\begin{bmatrix} R & d \end{bmatrix}$. Solve Rx=0 and Rx=d.
Check your work by plugging your values into the equations Ux=0 and Ux=c .
Solution

Gauss-Jordan elimination divides row 2 by 3:

$$\begin{bmatrix} U & c \end{bmatrix} = \begin{bmatrix} 1 & 0 & 2 & 5 \\ 0 & 3 & 4 & 8 \end{bmatrix} \longrightarrow \begin{bmatrix} R & d \end{bmatrix} = \begin{bmatrix} 1 & 0 & 2 & 5 \\ 0 & 1 & 4/3 & 8/3 \end{bmatrix}.$$

The free variable is $x_3$. Setting $x_3 = 1$ in Rx=0 gives the special solution $\begin{bmatrix} -2 \\ -4/3 \\ 1 \end{bmatrix}$, and setting $x_3 = 0$ in Rx=d gives the particular solution $\begin{bmatrix} 5 \\ 8/3 \\ 0 \end{bmatrix}$, so the complete solution is

$$x_{complete} = \begin{bmatrix} 5 \\ 8/3 \\ 0 \end{bmatrix} + c_1 \begin{bmatrix} -2 \\ -4/3 \\ 1 \end{bmatrix}.$$

Finally, we check that this is the correct solution by plugging it into the equation Ux=c:

$$\begin{bmatrix} 1 & 0 & 2 \\ 0 & 3 & 4 \end{bmatrix} \begin{bmatrix} 5 \\ 8/3 \\ 0 \end{bmatrix} = \begin{bmatrix} 5 \\ 8 \end{bmatrix}. \checkmark$$
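The same check can be done in code. A sketch verifying that the particular solution $(5, 8/3, 0)$ plus any multiple of the special solution $(-2, -4/3, 1)$ solves both the reduced system and the original Ux=c:

```python
import numpy as np

U = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 4.0]])
c = np.array([5.0, 8.0])

# Gauss-Jordan: dividing row 2 by 3 gives R and d.
R = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 4.0 / 3.0]])
d = np.array([5.0, 8.0 / 3.0])

xp = np.array([5.0, 8.0 / 3.0, 0.0])    # particular solution (free x3 = 0)
s  = np.array([-2.0, -4.0 / 3.0, 1.0])  # special solution  (x3 = 1)

# Every xp + t*s solves Rx = d and the original Ux = c.
for t in (-1.0, 0.0, 2.5):
    x = xp + t * s
    assert np.allclose(R @ x, d)
    assert np.allclose(U @ x, c)
print("Ux = c verified")
```

Row operations do not change the solution set, which is why the solutions of Rx=d and Ux=c coincide.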
Problem 8.3: (3.4 #36.) Suppose Ax=b and Cx=b have the same (complete) solutions for every b. Is it true that A=C?
Solution
Yes. In order to check that A=C as matrices, it is enough to check that Ay=Cy for all vectors y of the correct size (or just for the standard basis vectors, since multiplication by them "picks out the columns"). So let y be any vector of the correct size, and set b=Ay. Then y is certainly a solution to Ax=b, and so by our hypothesis must also be a solution to Cx=b; in other words, Cy=b=Ay.
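The parenthetical fact used above, that multiplying by the i-th standard basis vector picks out the i-th column, is easy to confirm; a sketch with a hypothetical example matrix:

```python
import numpy as np

# A hypothetical example matrix; any matrix works.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# A @ e_i equals the i-th column of A, so agreeing on all
# standard basis vectors forces two matrices to be equal.
for i in range(A.shape[1]):
    e = np.zeros(A.shape[1])
    e[i] = 1.0
    assert np.allclose(A @ e, A[:, i])
print("columns recovered")
```

This is exactly why Ay=Cy for all y (in particular for each $e_i$) implies A=C column by column.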