In the previous section, we looked at systems of linear equations from a graphical perspective. Since the equations had only two or three unknowns, we could study the solution spaces as the intersections of lines and planes.
Remembering that we will eventually consider many more equations and unknowns, this will, in general, not be a useful strategy. Instead, we will approach this problem algebraically and develop a technique to understand the solution spaces of general systems of linear equations.
Gaussian elimination
We will develop an algorithm, which is usually called Gaussian elimination, that allows us to describe the solution space to a system of linear equations.
Preview Activity 1.2.1.
Let's begin by considering some simple examples that will guide us in finding a more general approach.
- Give a description of the solution space to the linear system:
\begin{equation*} \begin{alignedat}{3} x & & & {}={} & 2 \\ & & y & {}={} & -1. \\ \end{alignedat} \end{equation*}
- Give a description of the solution space to the linear system:
\begin{equation*} \begin{alignedat}{4} -x & {} + {} & 2y & {}-{} & z & {}={} & -3 \\ & & 3y & {}+{} & z & {}={} & -1 \\ & & & & 2z & {}={} & 4. \\ \end{alignedat} \end{equation*}
- Give a description of the solution space to the linear system:
\begin{equation*} \begin{alignedat}{3} x & {} + {} & 2y & {}={} & 2 \\ 2x& {}+{} & 2y & {}={} & 0. \\ \end{alignedat} \end{equation*}
- Describe the solution space to the linear equation \(0x = 0\text{.}\)
- Describe the solution space to the linear equation \(0x = 5\text{.}\)
As the examples in this preview activity provide some motivation for the general approach we will develop, we wish to call particular attention to two of the examples.
Observation 1.2.1.
Let's look more carefully at two examples.
- First, finding the solution space to some systems is simple. For instance, each equation in the following system
\begin{equation*} \begin{alignedat}{3} x & & & {}={} & 2 \\ & & y & {}={} & -1. \\ \end{alignedat} \end{equation*}
has only one unknown, so we can see that there is exactly one solution, which is \((x,y) = (2,-1)\text{.}\) We call such a system decoupled.
- Second, we may operate on a linear system, transforming it into a new system that has the same solution space. For instance, given the system
\begin{equation*} \begin{alignedat}{4} -x & {} + {} & 2y & {}-{} & z & {}={} & -3 \\ & & 3y & {}+{} & z & {}={} & -1 \\ & & & & 2z & {}={} & 4, \\ \end{alignedat} \end{equation*}
we may multiply the third equation by \(1/2\) to obtain
\begin{equation*} \begin{alignedat}{4} -x & {} + {} & 2y & {}-{} & z & {}={} & -3 \\ & & 3y & {}+{} & z & {}={} & -1 \\ & & & & z & {}={} & 2. \\ \end{alignedat} \end{equation*}
Any solution to this system of equations must then have \(z=2\text{.}\)
Once we know that, we may substitute \(z=2\) into the first and second equation and simplify to obtain a new system of equations having the same solutions:
\begin{equation*} \begin{alignedat}{4} -x & {} + {} & 2y & {}{} & & {}={} & -1 \\ & & 3y & {}{} & & {}={} & -3 \\ & & & & z & {}={} & 2. \\ \end{alignedat} \end{equation*}
Continuing in this way, we eventually obtain a decoupled system showing that there is exactly one solution, which is \((x,y,z)=(-1,-1,2)\text{.}\)
Our original system,
\begin{equation*} \begin{alignedat}{4} -x & {} + {} & 2y & {}-{} & z & {}={} & -3 \\ & & 3y & {}+{} & z & {}={} & -1 \\ & & & & 2z & {}={} & 4, \\ \end{alignedat} \end{equation*}
is called a triangular system due to the shape formed by the coefficients. As this example demonstrates, triangular systems are easily solved by a process called back substitution.
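Back substitution is easy to carry out by hand, and it is also easy to mechanize. Here is a minimal sketch in Python for the triangular system above, written only as an illustration; the coefficient arrays and variable names are our own choices, not notation from the text.

```python
# A minimal sketch of back substitution on an upper-triangular system.
# The triangular system from Observation 1.2.1:
#   -x + 2y -  z = -3
#         3y +  z = -1
#              2z =  4
A = [[-1, 2, -1],
     [ 0, 3,  1],
     [ 0, 0,  2]]
b = [-3, -1, 4]

n = len(b)
x = [0.0] * n
# Work from the last equation up, solving for one unknown at a time.
for i in range(n - 1, -1, -1):
    known = sum(A[i][j] * x[j] for j in range(i + 1, n))
    x[i] = (b[i] - known) / A[i][i]

print(x)   # expected: [-1.0, -1.0, 2.0]
```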
Let's look at the process of substitution a little more carefully. A natural approach to the system
\begin{equation*} \begin{alignedat}{3} x & {} + {} & 2y & {}={} & 2 \\ 2x& {}+{} & 2y & {}={} & 0. \\ \end{alignedat} \end{equation*}
is to use the first equation to express \(x\) in terms of \(y\text{:}\)
\begin{equation*} x = 2-2y \end{equation*}
and then substitute this into the second equation and simplify:
\begin{equation*} \begin{alignedat}{2} 2x + 2y & {}={} & 0 \\ 2(2-2y) + 2y & {}={} & 0 \\ 4-4y + 2y & {}={} & 0 \\ -2y & {}={} & -4. \\ \end{alignedat} \end{equation*}
The two-step process of solving for \(x\) and substituting into the second equation may be performed more efficiently by adding a multiple of the first equation to the second. More specifically, we multiply the first equation by -2 and add to the second equation
\begin{equation*} \begin{array}{cr} & -2(\text{equation 1}) \\ + & \text{equation 2} \\ \hline \end{array} \end{equation*}
to obtain
\begin{equation*} \begin{array}{cr} & -2(x+2y=2) \\ + & 2x+2y = 0 \\ \hline \\ \end{array} \end{equation*}
which gives us
\begin{equation*} \begin{array}{crcr} & -2x-4y & = & -4 \\ + & 2x+2y & = & 0 \\ \hline & -2y & = & -4. \\ \end{array} \end{equation*}
In this way, the system
\begin{equation*} \begin{alignedat}{3} x & {} + {} & 2y & {}={} & 2 \\ 2x& {}+{} & 2y & {}={} & 0. \\ \end{alignedat} \end{equation*}
is transformed into the system
\begin{equation*} \begin{alignedat}{3} x & {} + {} & 2y & {}={} & 2 \\ & & -2y & {}={} & -4, \\ \end{alignedat} \end{equation*}
which has the same solution space. Of course, the choice to multiply the first equation by -2 was made so that terms involving \(x\) in the two equations will cancel when added. Notice that this operation transforms our original system into a triangular one; we may now perform back substitution to arrive at a decoupled system.
Based on these observations, we take note of three operations that transform a system of linear equations into a new system of equations having the same solution space. Our goal is to create a new system whose solution space is the same as the original system's and may be easily described.
- Scaling
-
We may multiply one equation by a nonzero number. For instance,
\begin{equation*} 2x -4y = 6 \end{equation*}
has the same set of solutions as
\begin{equation*} \frac12(2x-4y=6) \end{equation*}
or
\begin{equation*} x-2y=3\text{.} \end{equation*}
- Interchange
- Interchanging equations will not change the set of solutions. For instance,
\begin{equation*} \begin{alignedat}{3} 2x & {}+{} & 4y & {}={} & 1 \\ x & {}-{} & 3y & {}={} & 0 \\ \end{alignedat} \end{equation*}
has the same set of solutions as
\begin{equation*} \begin{alignedat}{3} x & {}-{} & 3y & {}={} & 0 \\ 2x & {}+{} & 4y & {}={} & 1. \\ \end{alignedat} \end{equation*}
- Replacement
-
As we saw above, we may multiply one equation by a real number and add it to another equation. We call this process replacement.
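To make these three operations concrete, here is a small sketch in Python in which each equation \(ax+by=c\) is stored as a coefficient list \([a,b,c]\). This representation is only an illustrative choice, not notation from the text.

```python
# Each equation ax + by = c is stored as the list [a, b, c].
eq1 = [1, 2, 2]   #  x + 2y = 2
eq2 = [2, 2, 0]   # 2x + 2y = 0

def scale(eq, k):
    """Scaling: multiply an equation by a nonzero number k."""
    return [k * t for t in eq]

def replace(eq_target, eq_source, k):
    """Replacement: add k times eq_source to eq_target."""
    return [t + k * s for t, s in zip(eq_target, eq_source)]

# Interchange is simply swapping:  eq1, eq2 = eq2, eq1

# Multiply the first equation by -2 and add it to the second,
# as in the discussion above.
new_eq2 = replace(eq2, eq1, -2)
print(new_eq2)   # [0, -2, -4], i.e. -2y = -4
```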
Example 1.2.2
Let's illustrate the use of these operations to find the solution space to the system of equations:
\begin{equation*} \begin{alignedat}{4} x & {}+{} & 2y & & & {}={} & 4 \\ 2x & {}+{} & y & {}-{} & 3z & {}={} & 11 \\ -3x & {}-{} & 2y & {}+{} & z & {}={} & -10 \\ \end{alignedat} \end{equation*}
We will first transform the system into a triangular system so we start by eliminating \(x\) from the second and third equations.
We begin with a replacement operation where we multiply the first equation by -2 and add the result to the second equation.
\begin{equation*} \begin{alignedat}{4} x & {}+{} & 2y & & & {}={} & 4 \\ & & -3y & {}-{} & 3z & {}={} & 3 \\ -3x & {}-{} & 2y & {}+{} & z & {}={} & -10 \\ \end{alignedat} \end{equation*}
Scale the second equation by multiplying it by \(-1/3\text{.}\)
\begin{equation*} \begin{alignedat}{4} x & {}+{} & 2y & & & {}={} & 4 \\ & & y & {}+{} & z & {}={} & -1 \\ -3x & {}-{} & 2y & {}+{} & z & {}={} & -10 \\ \end{alignedat} \end{equation*}
Another replacement operation eliminates \(x\) from the third equation. We multiply the first equation by 3 and add to the third.
\begin{equation*} \begin{alignedat}{4} x & {}+{} & 2y & & & {}={} & 4 \\ & & y & {}+{} & z & {}={} & -1 \\ & & 4y & {}+{} & z & {}={} & 2 \\ \end{alignedat} \end{equation*}
Eliminate \(y\) from the third equation by multiplying the second equation by -4 and adding it to the third.
\begin{equation*} \begin{alignedat}{4} x & {}+{} & 2y & & & {}={} & 4 \\ & & y & {}+{} & z & {}={} & -1 \\ & & & & -3z & {}={} & 6 \\ \end{alignedat} \end{equation*}
After scaling the third equation by \(-1/3\text{,}\) we have found the value for \(z\text{.}\)
\begin{equation*} \begin{alignedat}{4} x & {}+{} & 2y & & & {}={} & 4 \\ & & y & {}+{} & z & {}={} & -1 \\ & & & & z & {}={} & -2 \\ \end{alignedat} \end{equation*}
The system now has a triangular form so we will begin the process of back substitution by multiplying the third equation by -1 and adding to the second.
\begin{equation*} \begin{alignedat}{4} x & {}+{} & 2y & & & {}={} & 4 \\ & & y & & & {}={} & 1 \\ & & & & z & {}={} & -2 \\ \end{alignedat} \end{equation*}
Finally, multiply the second equation by -2 and add to the first to obtain:
\begin{equation*} \begin{alignedat}{4} x & & & & & {}={} & 2 \\ & & y & & & {}={} & 1 \\ & & & & z & {}={} & -2 \\ \end{alignedat} \end{equation*}
Now that we have arrived at a decoupled system, we know that there is exactly one solution to our original system of equations, which is \((x,y,z) = (2,1,-2)\text{.}\)
One could find the same result by applying a different sequence of replacement and scaling operations. However, we chose this particular sequence guided by our desire to first transform the system into a triangular one. To do this, we eliminated the first unknown \(x\) from all but one equation and then proceeded to the next unknowns working left to right. Once we had a triangular system, we used back substitution moving through the unknowns right to left.
We call this process Gaussian elimination and note that it is our primary tool for solving systems of linear equations.
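As a check on Example 1.2.2, we can hand the same system to a computer algebra system and confirm that it reports the single solution \((2,1,-2)\text{.}\) The sketch below assumes SymPy is available; it is a verification aid, not part of the elimination algorithm itself.

```python
from sympy import symbols, Eq, solve

x, y, z = symbols("x y z")
system = [
    Eq(x + 2*y, 4),
    Eq(2*x + y - 3*z, 11),
    Eq(-3*x - 2*y + z, -10),
]
print(solve(system, (x, y, z)))   # expected: {x: 2, y: 1, z: -2}
```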
Activity 1.2.2. Gaussian Elimination.
Use Gaussian elimination to describe the solutions to the following systems of linear equations.
- Does the following linear system have exactly one solution, infinitely many solutions, or no solutions?
\begin{equation*} \begin{alignedat}{4} x & {}+{} & y & {}+{} & 2z & {}={} & 1 \\ 2x & {}-{} & y & {}-{} & 2z & {}={} & 2 \\ -x & {}+{} & y & {}+{} & z & {}={} & 0 \\ \end{alignedat} \end{equation*}
- Does the following linear system have exactly one solution, infinitely many solutions, or no solutions?
\begin{equation*} \begin{alignedat}{4} -x & {}-{} & 2y & {}+{} & 2z & {}={} & -1 \\ 2x & {}+{} & 4y & {}-{} & z & {}={} & 5 \\ x & {}+{} & 2y & & & {}={} & 3 \\ \end{alignedat} \end{equation*}
- Does the following linear system have exactly one solution, infinitely many solutions, or no solutions?
\begin{equation*} \begin{alignedat}{4} -x & {}-{} & 2y & {}+{} & 2z & {}={} & -1 \\ 2x & {}+{} & 4y & {}-{} & z & {}={} & 5 \\ x & {}+{} & 2y & & & {}={} & 2 \\ \end{alignedat} \end{equation*}
Augmented matrices
After performing Gaussian elimination a few times, you probably noticed that you spent most of your time concentrating on the coefficients and simply recorded the unknowns as placeholders. For convenience, we will therefore introduce a shorthand description of linear systems.
When writing a linear system, we always write the unknowns in the same order in each equation. We then construct an augmented matrix by simply forgetting about the unknowns and recording the numerical data in a rectangular array. For instance, the system of equations below has the following augmented matrix
\begin{equation*} \begin{alignedat}{4} -x & {}-{} & 2y & {}+{} & 2z & {}={} & -1 \\ 2x & {}+{} & 4y & {}-{} & z & {}={} & 5 \\ x & {}+{} & 2y & & & {}={} & 3 \\ \end{alignedat} \end{equation*}
\begin{equation*} \left[ \begin{array}{rrr|r} -1 & -2 & 2 & -1 \\ 2 & 4 & -1 & 5 \\ 1 & 2 & 0 & 3 \\ \end{array} \right]. \end{equation*}
The vertical line reminds us where the equals signs appear in the equations. Entries to the left of the line correspond to the coefficients of the equations. We will sometimes choose to focus only on the coefficients of the system, in which case we write the coefficient matrix as
\begin{equation*} \left[ \begin{array}{rrr} -1 & -2 & 2 \\ 2 & 4 & -1 \\ 1 & 2 & 0 \\ \end{array} \right]. \end{equation*}
The three operations we perform on systems of equations translate naturally into operations on matrices. For instance, the replacement operation that multiplies the first equation by 2 and adds it to the second may be recorded as
\begin{equation*} \left[ \begin{array}{rrr|r} -1 & -2 & 2 & -1 \\ 2 & 4 & -1 & 5 \\ 1 & 2 & 0 & 3 \\ \end{array} \right] \sim \left[ \begin{array}{rrr|r} -1 & -2 & 2 & -1 \\ 0 & 0 & 3 & 3 \\ 1 & 2 & 0 & 3 \\ \end{array} \right]. \end{equation*}
The symbol \(\sim\) between the matrices indicates that the two matrices are related by a sequence of scaling, interchange, and replacement operations. Since these operations act on the rows of the matrices, we say that the matrices are row equivalent.
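Since row operations act directly on the rows of the augmented matrix, they are easy to express with array arithmetic. The following sketch uses NumPy purely for convenience and reproduces the replacement operation recorded above; the variable names are our own.

```python
import numpy as np

A = np.array([[-1, -2,  2, -1],
              [ 2,  4, -1,  5],
              [ 1,  2,  0,  3]], dtype=float)

# Replacement: add 2 times row 0 to row 1 (rows are indexed from 0 here).
A[1] = A[1] + 2 * A[0]
print(A)
# [[-1. -2.  2. -1.]
#  [ 0.  0.  3.  3.]
#  [ 1.  2.  0.  3.]]
```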
Activity 1.2.3. Augmented matrices and solution spaces.
- Write the augmented matrix for the system of equations
\begin{equation*} \begin{alignedat}{4} x & {}+{} & 2y & {}-{} & z & {}={} & 1 \\ 3x & {}+{} & 2y & {}+{} & 2z & {}={} & 7 \\ -x & & & {}+{} & 4z & {}={} & -3 \\ \end{alignedat} \end{equation*}
and perform Gaussian elimination to describe the solution space of the system of equations in as much detail as you can.
- Suppose that you have a system of linear equations in the unknowns \(x\) and \(y\) whose augmented matrix is row equivalent to
\begin{equation*} \left[ \begin{array}{rr|r} 1 & 0 & 3 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \\ \end{array} \right]. \end{equation*}
Write the system of linear equations corresponding to the augmented matrix. Then describe the solution set of the system of equations in as much detail as you can.
- Suppose that you have a system of linear equations in the unknowns \(x\) and \(y\) whose augmented matrix is row equivalent to
\begin{equation*} \left[ \begin{array}{rr|r} 1 & 0 & 3 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right]. \end{equation*}
Write the system of linear equations corresponding to the augmented matrix. Then describe the solution set of the system of equations in as much detail as you can.
- Suppose that the augmented matrix of a system of linear equations has the following shape where \(*\) could be any real number.
\begin{equation*} \left[ \begin{array}{rrrrr|r} * & * & * & * & * & * \\ * & * & * & * & * & * \\ * & * & * & * & * & * \\ \end{array} \right]. \end{equation*}
- How many equations are there in this system and how many unknowns?
- Based on our earlier discussion in Section 1.1, do you think it's possible that this system has exactly one solution, infinitely many solutions, or no solutions?
- Suppose that this augmented matrix is row equivalent to
\begin{equation*} \left[ \begin{array}{rrrrr|r} 1 & 2 & 0 & 0 & 3 & 2 \\ 0 & 0 & 1 & 2 & -1 & -1 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right]. \end{equation*}
Make a choice for the names of the unknowns and write the corresponding system of linear equations. Does the system have exactly one solution, infinitely many solutions, or no solutions?
Reduced row echelon form
There is a special class of matrices whose form makes it especially easy to describe the solution space of the corresponding linear system. As we describe the properties of this class of matrices, it may be helpful to consider an example, such as the following matrix.
\begin{equation*} \left[ \begin{array}{rrrrrr} 1 & * & 0 & * & 0 & * \\ 0 & 0 & 1 & * & 0 & * \\ 0 & 0 & 0 & 0 & 1 & * \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right]. \end{equation*}
Definition 1.2.3
We say that a matrix is in reduced row echelon form if the following properties are satisfied.
- Any rows in which all the entries are zero are at the bottom of the matrix.
- If we move across a row from left to right, the first nonzero entry we encounter is 1. We call this entry the leading entry in the row.
- The leading entry in one row is to the right of the leading entry in any row above.
- A leading entry is the only nonzero entry in its column.
We call a matrix in reduced row echelon form a reduced row echelon matrix.
We have been intentionally vague about whether the matrix we are considering is an augmented matrix corresponding to a linear system or a coefficient matrix since we will eventually consider both possibilities.
Activity 1.2.4. Identifying reduced row echelon matrices.
Consider each of the following augmented matrices. Determine if the matrix is in reduced row echelon form. If it is not, perform a sequence of scaling, interchange, and replacement operations to obtain a row equivalent matrix that is in reduced row echelon form. Then use the reduced row echelon matrix to describe the solution space.
- \(\displaystyle \left[ \begin{array}{rrr|r} 2 & 0 & 4 & -8 \\ 0 & 1 & 3 & 2 \\ \end{array} \right].\)
- \(\displaystyle \left[ \begin{array}{rrr|r} 1 & 0 & 0 & -1 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 1 \\ \end{array} \right].\)
- \(\displaystyle \left[ \begin{array}{rrr|r} 1 & 0 & 4 & 2 \\ 0 & 1 & 3 & 2 \\ 0 & 0 & 0 & 1 \\ \end{array} \right].\)
- \(\displaystyle \left[ \begin{array}{rrr|r} 0 & 1 & 3 & 2 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 4 & 2 \\ \end{array} \right].\)
- \(\displaystyle \left[ \begin{array}{rrr|r} 1 & 2 & -1 & 2 \\ 0 & 1 & -2 & 0 \\ 0 & 0 & 1 & 1 \\ \end{array} \right].\)
If we are given a matrix, the examples in the previous activity indicate that there is a sequence of row operations that produces a matrix in reduced row echelon form. Moreover, the conditions that define reduced row echelon matrices guarantee that this matrix is unique.
Theorem 1.2.4.
Given a matrix, there is exactly one reduced row echelon matrix to which it is row equivalent.
Once we have this reduced row echelon matrix, we may describe the set of solutions to the corresponding linear system with relative ease.
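If you want to check a row reduction by machine, SymPy's rref() computes exactly this unique reduced row echelon matrix, along with the indices of the pivot columns. The sketch below applies it to the augmented matrix considered earlier; the only assumption is that SymPy is available.

```python
from sympy import Matrix

A = Matrix([[-1, -2,  2, -1],
            [ 2,  4, -1,  5],
            [ 1,  2,  0,  3]])

R, pivots = A.rref()
print(R)        # Matrix([[1, 2, 0, 3], [0, 0, 1, 1], [0, 0, 0, 0]])
print(pivots)   # (0, 2)
```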
Example 1.2.5. Describing the solution space from a reduced row echelon matrix
- Consider the reduced row echelon matrix
\begin{equation*} \left[ \begin{array}{rrr|r} 1 & 0 & 2 & -1 \\ 0 & 1 & 1 & 2 \\ \end{array} \right]. \end{equation*}
Its corresponding linear system may be written as
\begin{equation*} \begin{alignedat}{4} x & & & {}+{} & 2z & {}={} & -1 \\ & & y & {}+{} & z & {}={} & 2. \\ \end{alignedat} \end{equation*}
Let's rewrite the equations as
\begin{equation*} \begin{alignedat}{2} x & {}={} & -1 -2z\\ y & {}={} & 2-z. \\ \end{alignedat} \end{equation*}
From this description, it is clear that we obtain a solution for any value of the variable \(z\text{.}\) For instance, if \(z=2\text{,}\) then \(x = -5\) and \(y=0\) so that \((x,y,z) = (-5,0,2)\) is a solution. Similarly, if \(z=0\text{,}\) we see that \((x,y,z) = (-1,2,0)\) is also a solution.
Because there is no restriction on the value of \(z\text{,}\) we call it a free variable, and note that the linear system has infinitely many solutions. The variables \(x\) and \(y\) are called basic variables as they are determined once we make a choice of the free variable.
We will call this description of the solution space, in which the basic variables are written in terms of the free variables, a parametric description of the solution space.
- Consider the matrix
\begin{equation*} \left[ \begin{array}{rrr|r} 1 & 0 & 0 & 4 \\ 0 & 1 & 0 & -3 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ \end{array} \right]. \end{equation*}
The last equation gives
\begin{equation*} 0x +0y+0z = 0\text{,} \end{equation*}
which is true for any \((x,y,z)\text{.}\) We may safely ignore this equation since it does not provide a restriction on the choice of \((x,y,z)\text{.}\) We then see that there is a unique solution \((x,y,z) = (4,-3,1)\text{.}\)
- Consider the matrix
\begin{equation*} \left[ \begin{array}{rrr|r} 1 & 0 & 2 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right]. \end{equation*}
Beginning with the last equation, we see that
\begin{equation*} 0x +0y+0z = 1\text{,} \end{equation*}
which is not true for any \((x,y,z)\text{.}\) There is no solution to this particular equation and therefore no solution to the system of equations.
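For readers who like to verify such conclusions computationally, SymPy's linsolve accepts an augmented matrix directly and returns the solution set. The sketch below, which assumes SymPy is available, reproduces the parametric description from the first matrix in this example and the empty solution set from the third.

```python
from sympy import Matrix, symbols, linsolve

x, y, z = symbols("x y z")

# Infinitely many solutions: z is free, x and y are basic.
A = Matrix([[1, 0, 2, -1],
            [0, 1, 1,  2]])
print(linsolve(A, (x, y, z)))   # {(-2*z - 1, 2 - z, z)}

# Inconsistent system: the last row reads 0 = 1.
B = Matrix([[1, 0,  2, 0],
            [0, 1, -1, 0],
            [0, 0,  0, 1]])
print(linsolve(B, (x, y, z)))   # EmptySet
```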
Summary
We saw several important concepts in this section.
- We can describe the solution space to a linear system by transforming it into a new linear system through a sequence of scaling, interchange, and replacement operations.
- We represented a system of linear equations by an augmented matrix. Using scaling, interchange, and replacement operations, the augmented matrix is row equivalent to exactly one reduced row echelon matrix.
- The reduced row echelon matrix allows us to easily describe the solution space to a system of linear equations.
Exercises 1.2.5
1
For each of the linear systems below, write the associated augmented matrix and find the reduced row echelon matrix that is row equivalent to it. Identify the basic and free variables and then describe the solution space of the original linear system using a parametric description, if appropriate.
-
\begin{equation*} \begin{alignedat}{3} 2x & {}+{} & y & {}={} & 0 \\ x & {}+{} & 2y & {}={} & 3 \\ -2x & {}+{} & 2y & {}={} & 6 \\ \end{alignedat} \end{equation*}
-
\begin{equation*} \begin{alignedat}{5} -x_1 & {}+{} & 2x_2 & & & {}+{} & x_3 & {}={} & 2 \\ 3x_1 & & & & & {}+{} & 2x_3 & {}={} & -1 \\ -x_1 & {}-{} & x_2 & & & {}+{} & x_3 & {}={} & 2 \\ \end{alignedat} \end{equation*}
-
\begin{equation*} \begin{alignedat}{5} x_1 & {}+{} & 2x_2 & {}-{} & 5x_3 & {}-{} & x_4 & {}={} & -3 \\ -2x_1 & {}-{} & 2x_2 & {}+{} & 6x_3 & {}-{} & 2x_4 & {}={} & 4 \\ x_1 & & & {}-{} & x_3 & {}+{} & 9x_4 & {}={} & 7 \\ & & -x_2 & {}+{} & 2x_3 & {}-{} & x_4 & {}={} & 4 \\ \end{alignedat} \end{equation*}
2
Consider each matrix below and determine if it is in reduced row echelon form. If not, indicate the reason and apply a sequence of row operations to find its reduced row echelon matrix. For each matrix, indicate whether the linear system has infinitely many solutions, exactly one solution, or no solutions.
-
\begin{equation*} \left[ \begin{array}{rrrr|r} 1 & 1 & 0 & 3 & 3 \\ 0 & 1 & 0 & -2 & 1 \\ 0 & 0 & 1 & 3 & 4 \\ \end{array} \right] \end{equation*}
-
\begin{equation*} \left[ \begin{array}{rrrr|r} 1 & 0 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & -3 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ \end{array} \right] \end{equation*}
-
\begin{equation*} \left[ \begin{array}{rrrr|r} 1 & 0 & 0 & 3 & 3 \\ 0 & 1 & 0 & -2 & 1 \\ 0 & 0 & 1 & 3 & 4 \\ 0 & 0 & 0 & 3 & 3 \\ \end{array} \right] \end{equation*}
-
\begin{equation*} \left[ \begin{array}{rrrr|r} 0 & 0 & 1 & 0 & -1 \\ 0 & 1 & 0 & 0 & 3 \\ 1 & 1 & 1 & 1 & 2 \\ \end{array} \right] \end{equation*}
3
Give an example of a reduced row echelon matrix that describes a linear system having the stated properties. If it is not possible to find such an example, explain why not.
- Write a reduced row echelon matrix for a linear system having five equations and three unknowns and having exactly one solution.
- Write a reduced row echelon matrix for a linear system having three equations and three unknowns and having no solution.
- Write a reduced row echelon matrix for a linear system having three equations and five unknowns and having infinitely many solutions.
- Write a reduced row echelon matrix for a linear system having three equations and four unknowns and having exactly one solution.
- Write a reduced row echelon matrix for a linear system having four equations and four unknowns and having exactly one solution.
4
For each of the questions below, provide a justification for your response.
- What does the presence of a row whose entries are all zero in an augmented matrix tell us about the solution space of the linear system?
- How can you determine if a linear system has no solutions directly from its reduced row echelon matrix?
- How can you determine if a linear system has infinitely many solutions directly from its reduced row echelon matrix?
- What can you say about the solution space of a linear system if there are more unknowns than equations and at least one solution exists?
5
Determine whether the following statements are true or false and explain your reasoning.
- If every variable is basic, then the linear system has exactly one solution.
- If two augmented matrices are row equivalent to one another, then they describe two linear systems having the same solution spaces.
- The presence of a free variable indicates that there are no solutions to the linear system.
- If a linear system has exactly one solution, then it must have the same number of equations as unknowns.
- If a linear system has the same number of equations as unknowns, then it has exactly one solution.