We now come to the really interesting part of solving systems, in which we can begin to combine some of our previous knowledge of geometry in N-dimensional space in order to deal with situations where a system does not have a unique solution. All of the systems in this subsection are assumed to exist in $\mathbb{R}^3$.
We should start by mentioning that two of the techniques used previously (Cramer's rule and solving via the matrix inverse, $\mathbf{x} = A^{-1}\mathbf{b}$) required the matrix $A$ to be invertible. What does the lack of invertibility mean for a system? Let's first return to the original system in this chapter:
Since this system has a distinct solution for each of its three variables, we say that this system is completely specified or completely constrained. Its coefficient matrix is guaranteed to be invertible, and its three equations are said to be independent.
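As a concrete sketch, assume an invertible $3 \times 3$ system (the coefficients below are chosen purely for illustration and are not necessarily the system discussed above); NumPy confirms the nonzero determinant, the full rank, and the single solution point:

```python
import numpy as np

# An assumed, purely illustrative completely constrained system:
#   x + 2y +  z = 4
#  2x +  y + 3z = 7
#   x +  y +  z = 3
A = np.array([[1.0, 2.0, 1.0],
              [2.0, 1.0, 3.0],
              [1.0, 1.0, 1.0]])
b = np.array([4.0, 7.0, 3.0])

print(np.linalg.det(A))          # nonzero, so A is invertible
print(np.linalg.matrix_rank(A))  # 3, so all three equations are independent
print(np.linalg.solve(A, b))     # the unique solution, (x, y, z) = (0, 1, 2)
```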
But consider another similar system:
In elementary algebra this system was said to be inconsistent, and it has no solution. Elimination yields something such as $0 = c$ for some nonzero constant $c$, which is clearly a bogus result. The more common term for such a system is overspecified or overconstrained. Notice that the top two equations specify some kind of relationship between the variables $x$ and $y$. Since the coefficients are multiples of one another but the constants are not, these equations provide contradictory information about the system (i.e., they are inconsistent). Now consider another set of similar equations:
With this system, elimination completely eliminates the second equation. Since the second equation is simply a multiple of the first, it does not add any more information, and this system is therefore underspecified or underconstrained, with an infinite number of solutions. We can say that the first and second equations are dependent.
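To see how elimination distinguishes these two situations, the following sketch assumes illustrative coefficient values: in the first augmented matrix the top two rows have proportional coefficients but conflicting constants (overconstrained), while in the second the second row is an exact multiple of the first (underconstrained). The specific numbers are assumptions, not the systems from the text.

```python
from sympy import Matrix

# Assumed overconstrained system: the first two equations relate x and y with
# proportional coefficients but conflicting constants, so they contradict.
#   x + 2y     = 4
#  2x + 4y     = 9   (coefficients doubled, but 9 != 2*4)
#   x +  y + z = 3
overconstrained = Matrix([[1, 2, 0, 4],
                          [2, 4, 0, 9],
                          [1, 1, 1, 3]])
print(overconstrained.rref()[0])   # contains the row [0, 0, 0, 1], i.e. 0 = 1

# Assumed underconstrained system: the second equation is exactly twice the first,
# so elimination wipes it out and only two independent equations remain.
#   x + 2y     = 4
#  2x + 4y     = 8
#   x +  y + z = 3
underconstrained = Matrix([[1, 2, 0, 4],
                           [2, 4, 0, 8],
                           [1, 1, 1, 3]])
print(underconstrained.rref()[0])  # one row of zeros: infinitely many solutions
```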
Although elementary algebra typically deals only with systems that have unique solutions, linear algebra provides a powerful set of tools to deal with systems that have infinitely many solutions. At this point, it is important to realize that just because a system has an infinite number of solutions does not mean that it will allow any solution. It is easy to verify that some particular points are solutions to the system above while others are not. All the solutions to this system are constrained to lie along an infinitely long line.
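As a sketch of this point, take the assumed underconstrained system from the previous code block: SymPy's linsolve returns its solution set in parametric form, a line with one free parameter, and substitution shows that a point on that line satisfies the system while an arbitrary point such as the origin does not.

```python
from sympy import Matrix, symbols, linsolve

x, y, z = symbols('x y z')

# The same assumed underconstrained system, as an augmented matrix.
aug = Matrix([[1, 2, 0, 4],
              [2, 4, 0, 8],
              [1, 1, 1, 3]])

# The solution set is a line traced out by one free parameter, not all of R^3.
print(linsolve(aug, x, y, z))    # {(2 - 2*z, z + 1, z)}

A, b = aug[:, :3], aug[:, 3]
on_line  = Matrix([2, 1, 0])     # the parametric solution at z = 0
off_line = Matrix([0, 0, 0])     # an arbitrary point not on the line
print(A * on_line == b)          # True:  (2, 1, 0) satisfies the system
print(A * off_line == b)         # False: (0, 0, 0) does not
```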
Finally, we will consider another system with infinite solutions:
It is immediately obvious that the second two equations are simply multiples of the first. Thus, the only relationship that adds information about this system is the first equation. Since we have assumed we are in $\mathbb{R}^3$, we know from the chapter on N-space geometry that one equation in three unknowns defines a plane. The augmented and RREF matrices would have been the following:
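For illustration, assume the system $x + y + z = 2$, $2x + 2y + 2z = 4$, $3x + 3y + 3z = 6$, in which the second and third equations are multiples of the first; the following sketch builds the augmented matrix and computes its RREF with SymPy (the coefficients are assumed, not taken from the text):

```python
from sympy import Matrix

# Assumed system: x + y + z = 2, with the other two equations as multiples of it.
augmented = Matrix([[1, 1, 1, 2],
                    [2, 2, 2, 4],
                    [3, 3, 3, 6]])

rref_matrix, pivot_columns = augmented.rref()
print(rref_matrix)    # only the first row survives; the other rows reduce to zeros
print(pivot_columns)  # (0,): a single pivot column
```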
If we define the rank of a matrix as the number of pivot columns in the matrix, then $\operatorname{rank}(A) = 1$. On the other hand, if this system had been in $\mathbb{R}^2$, the solution space would have been a line, and the appropriate matrices would have been the following:
Note that $A$ still has a rank of 1 since it still has only 1 pivot. Likewise, in $\mathbb{R}^4$ the solution would have been a 3-dimensional hyperplane with $\operatorname{rank}(A) = 1$. Thus, the dimension of the solution space for an $m \times n$ matrix $A$ equals the dimension $n$ of the space $\mathbb{R}^n$ minus the number of independent equations:
$$\dim(\text{solution space}) = n - \operatorname{rank}(A)$$
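A quick numerical check of this relationship: the sketch below assumes a single independent equation, with all coefficients equal to 1, repeated as multiples of itself and posed in $n = 2$, $3$, and $4$ unknowns, then verifies that the number of free parameters in the solution (the dimension of the null space of $A$) equals $n - \operatorname{rank}(A)$.

```python
from sympy import Matrix

for n in (2, 3, 4):
    # Rows are 1x, 2x, and 3x the same equation, so only one equation is independent.
    A = Matrix([[k] * n for k in (1, 2, 3)])
    rank = A.rank()                  # number of pivot columns (independent equations)
    free = len(A.nullspace())        # dimension of the solution space of A*x = 0
    print(n, rank, free, free == n - rank)
# Prints: 2 1 1 True / 3 1 2 True / 4 1 3 True
```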
In the next subsection we will return to these systems to describe the RREF forms of their coefficient matrices and the parametric forms of their solution spaces.