Section 2.4 Invertible Matrix Theorem
Investigation 2.4.1.
In many texts there is a long list of equivalent conditions for when a square matrix is invertible. Below is a list of some of these conditions that we have talked about or proven. Go back through your notes and questions and cite where we connected two of the ideas in the list. For instance, parts (a) and (b) are linked by Investigation 2.3.2.
Before stating this major theorem, we should explain what the phrase “the following are equivalent” (sometimes abbreviated “TFAE” in scratchwork or on the board) means. A theorem of this type is essentially a giant if-and-only-if theorem: either every statement in the theorem is true or every statement is false; it is not possible for some to be true and some to be false. In a theorem with, say, three statements, we often prove that statement 1 implies statement 2, statement 2 implies statement 3, and statement 3 implies statement 1. Then you can start at any statement and reach any other statement, showing that if one is true, all the others must be true. However, with longer lists, we sometimes have to prove the implications a bit more piecemeal.
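The cyclic strategy for three statements can be written schematically as follows (the numbering here is illustrative, matching the three-statement example above):
\[
(1) \Rightarrow (2) \Rightarrow (3) \Rightarrow (1).
\]
For example, \((2) \Rightarrow (1)\) then follows by composing \((2) \Rightarrow (3)\) with \((3) \Rightarrow (1)\), so every pairwise implication is recovered from the cycle.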
Theorem 2.4.1. The Invertible Matrix Theorem.
Let \(A\) be an \(n \times n\) matrix. The following are equivalent:
\(A\) is an invertible matrix.
\(A\) is row equivalent to \(Id_n\text{.}\)
\(A\) has \(n\) pivots.
\(A\vec{x} =\vec{0}\) has only the trivial solution.
The linear transformation \(\vec{x} \rightarrow A\vec{x}\) is one-to-one.
The linear transformation \(\vec{x} \rightarrow A\vec{x}\) is onto.
\(A\vec{x}=\vec{b}\) has a solution for every \(\vec{b} \in \mathbb{R}^n\text{.}\)
The columns of \(A\) form a linearly independent set.
The columns of \(A\) span \(\mathbb{R}^n\text{.}\)
The rows of \(A\) form a linearly independent set.
The rows of \(A\) span \(\mathbb{R}^n\text{.}\)
\(A^T\) is invertible.
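The theorem says these conditions stand or fall together, so for any particular matrix we can spot-check several of them numerically. Below is a minimal sketch using NumPy; the \(3 \times 3\) matrix is an illustrative assumption, not one from the text.

```python
import numpy as np

# Illustrative 3x3 matrix (an assumption for this sketch); det(A) = 8, so
# by the Invertible Matrix Theorem every condition below should hold.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
n = A.shape[0]

# "A has n pivots" corresponds to rank(A) == n.
full_rank = np.linalg.matrix_rank(A) == n

# "A is an invertible matrix": compute A^{-1} and confirm A A^{-1} = Id_n.
A_inv = np.linalg.inv(A)
identity_check = np.allclose(A @ A_inv, np.eye(n))

# "A x = 0 has only the trivial solution": solve and compare to the zero vector.
x = np.linalg.solve(A, np.zeros(n))
trivial_only = np.allclose(x, np.zeros(n))

# "A^T is invertible": rank(A^T) == n exactly when rank(A) == n.
transpose_full_rank = np.linalg.matrix_rank(A.T) == n

print(full_rank, identity_check, trivial_only, transpose_full_rank)
# prints: True True True True
```

Floating-point rank and inverse computations rely on tolerances, so this is a sanity check on examples, not a proof; the theorem is what guarantees the four answers must agree.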
Investigation 2.4.2.
Two important themes in this course have been tied to many different methods: 1) consistent systems of linear equations and 2) invertible matrices. These two ideas are distinct, though. Give an example of a consistent system of linear equations (in matrix equation form \(A\vec{x} = \vec{b}\)) whose coefficient matrix \(A\) is a non-invertible square matrix.
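A candidate answer can be tested numerically. The sketch below (assuming NumPy; the helper name `check_candidate` is hypothetical) uses the fact that \(A\vec{x} = \vec{b}\) is consistent exactly when \(\operatorname{rank}(A) = \operatorname{rank}([A \mid \vec{b}])\). The demo matrix shown is invertible on purpose, so it does not give away the investigation; substitute your own singular \(A\) and a suitable \(\vec{b}\).

```python
import numpy as np

def check_candidate(A, b, tol=1e-10):
    """Return (invertible, consistent) for a square matrix A and vector b.

    A is invertible iff rank(A) == n; the system A x = b is consistent
    iff rank(A) equals the rank of the augmented matrix [A | b].
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = A.shape[0]
    invertible = np.linalg.matrix_rank(A, tol=tol) == n
    augmented = np.column_stack([A, b])
    consistent = (np.linalg.matrix_rank(A, tol=tol)
                  == np.linalg.matrix_rank(augmented, tol=tol))
    return invertible, consistent

# Placeholder demo with an invertible matrix (NOT an answer to the
# investigation): a correct answer would make this print "False True".
inv, cons = check_candidate([[1.0, 0.0], [0.0, 1.0]], [1.0, 2.0])
print(inv, cons)  # prints: True True
```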