3.2 Bases and dimension

3.2.1 Linear combinations

If \(\mathbf{v}_1,\ldots,\mathbf{v}_n\) are elements of an \(\FF\)-vector space V then a linear combination of \(\mathbf{v}_1,\ldots,\mathbf{v}_n\) is an element of V of the form

\[l_1 \mathbf{v}_1 + \cdots + l_n \mathbf{v}_n \]

for some \(l_1,\ldots,l_n \in \FF\). The \(l_i\) are called the coefficients in this linear combination. A non-trivial linear combination is one where not all the coefficients are zero.

This allows us to rephrase our definition of subspace: a non-empty subset U of V is a subspace if and only if every linear combination of elements of U is again in U.

3.2.2 Linear independence

A sequence of vectors is an ordered list, written \((\mathbf{u}, \mathbf{v}, \mathbf{w},\ldots)\) or just \(\mathbf{u},\mathbf{v},\mathbf{w},\ldots\). Ordered means that, for example \((\mathbf{u},\mathbf{v}) \neq (\mathbf{v},\mathbf{u})\). Sequences are different to sets for two reasons: first \(\{\mathbf{v},\mathbf{u}\}\) is the same set as \(\{ \mathbf{u},\mathbf{v}\}\) — order doesn’t matter to sets — and secondly \(\{\mathbf{u},\mathbf{u}\} = \{\mathbf{u}\}\) whereas \((\mathbf{u},\mathbf{u}) \neq (\mathbf{u} )\).

Definition 3.3 A sequence \(\mathbf{v}_1,\ldots ,\mathbf{v}_n\) of elements of the \(\FF\)-vector space V is called linearly independent if the only linear combination \[\begin{equation} \tag{3.1} a_1 \mathbf{v}_1 + \cdots + a_n \mathbf{v}_n \end{equation}\] equal to the zero vector is the one where \(a_1 = \cdots = a_n = 0\).

A sequence which is not linearly independent is called linearly dependent, and an equation (3.1) in which not all the \(a_i=0\) is called a linear dependence relation for \(\mathbf{v}_1, \ldots, \mathbf{v}_n\).


To check if a sequence \(\mathbf{v}_1,\ldots,\mathbf{v}_n\) of elements of V is linearly independent, you have to see if there are any non-zero solutions to the equation \[ l_1\mathbf{v}_1+ \cdots + l_n \mathbf{v}_n = \mathbf{0}_V. \]
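
For column vectors this check is a concrete linear-system computation, and it can be automated. Below is a minimal sketch using Python’s sympy library (an illustrative aside, not part of the formal development); the vectors are the ones from the first part of Example 3.3 below.

```python
from sympy import Matrix

# Put v_1, ..., v_n as the columns of a matrix A.  The equation
# l_1 v_1 + ... + l_n v_n = 0 is then the homogeneous system A l = 0,
# and the sequence is linearly independent exactly when the only
# solution is l = 0, i.e. when the nullspace of A is trivial.
v1 = Matrix([1, 0])
v2 = Matrix([1, 1])
A = Matrix.hstack(v1, v2)

print(A.nullspace())  # []: no non-zero solutions, so v1, v2 are independent
```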

Notice that if the sequence \(\mathbf{v}_1,\ldots,\mathbf{v}_n\) contains the same element twice, it is linearly dependent: if \(\mathbf{v}_i = \mathbf{v}_j\) for \(i \neq j\) then \(\mathbf{v}_i-\mathbf{v}_j=\mathbf{0}_V\) is a non-trivial linear dependence relation.

By convention we regard the empty subset \(\emptyset\) of a vector space V as being linearly independent.

Example 3.3

  • The vectors \(\mathbf{x} =\begin{pmatrix} 1 \\ 0 \end{pmatrix}, \mathbf{y} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}\) are linearly independent in \(\mathbb{R}^2\). For suppose that \(\lambda \mathbf{x} + \mu \mathbf{y} = \mathbf{0} _{\RR^2}\). Then we have \[\begin{equation*} \begin{pmatrix} \lambda \\ 0 \end{pmatrix} + \begin{pmatrix} \mu \\ \mu \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \end{equation*}\] which is equivalent to saying that \(\lambda + \mu = 0\) and \(\mu=0\). It follows that \(\lambda=\mu=0\), so \(\mathbf{x}\) and \(\mathbf{y}\) are linearly independent.
  • The one-element sequence \(\mathbf{0}_V\) isn’t linearly independent: \(1 \mathbf{0}_V = \mathbf{0}_V\), so there is a non-trivial linear dependence relation for \(\mathbf{0}_V\).
  • \(\begin{pmatrix} 1 \\ 0 \end{pmatrix},\begin{pmatrix} -2 \\ 0 \end{pmatrix}\) are not linearly independent in \(\RR^2\), since we can make the zero vector of \(\RR^2\) as a non-trivial linear combination of these two vectors: \[\begin{equation*} \begin{pmatrix} 0\\0 \end{pmatrix} = 2 \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \begin{pmatrix} -2 \\ 0 \end{pmatrix} \end{equation*}\]
  • The vectors \(\mathbf{e}_1,\ldots,\mathbf{e}_n\) in \(\RR^n\) are linearly independent. For \[\begin{equation*} \sum_{i=1}^n \lambda_i \mathbf{e}_i = \begin{pmatrix} \lambda_1 \\ \vdots \\ \lambda_n \end{pmatrix} \end{equation*}\] and the only way for this to be the zero vector is if all of the coefficients \(\lambda_i\) are zero.

3.2.3 Spanning sequences

Definition 3.4 Let V be an \(\FF\)-vector space and \(\mathbf{s}_1,\ldots, \mathbf{s}_n \in V\). The span of \(\mathbf{s}_1,\ldots, \mathbf{s}_n\), written \(\spa ( \mathbf{s} _1,\ldots, \mathbf{s} _n)\), is the set of all linear combinations of \(\mathbf{s}_1,\ldots, \mathbf{s}_n\).

So \[\spa ( \mathbf{s} _1,\ldots, \mathbf{s} _n) = \{ l_1 \mathbf{s}_1 + \cdots + l_n \mathbf{s}_n : l_1, \ldots, l_n \in \FF \}.\] The span of the empty sequence of vectors is defined to be \(\{ \mathbf{0}_V\}\).
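
Whether a given vector lies in a span is the same kind of computation: \(\mathbf{v} \in \spa(\mathbf{s}_1,\ldots,\mathbf{s}_n)\) exactly when appending \(\mathbf{v}\) as an extra column leaves the rank of the matrix of the \(\mathbf{s}_i\) unchanged. A sketch in sympy, where the helper name in_span is our own choice for illustration (the vectors preview Example 3.4):

```python
from sympy import Matrix

def in_span(vectors, target):
    """Check whether `target` is a linear combination of `vectors`.

    Appending `target` as an extra column leaves the rank unchanged
    exactly when `target` already lies in the span of `vectors`.
    """
    A = Matrix.hstack(*vectors)
    return Matrix.hstack(A, target).rank() == A.rank()

u = Matrix([1, 0, 0])
v = Matrix([0, 1, 0])
print(in_span([u, v], Matrix([2, 3, 0])))  # True
print(in_span([u, v], Matrix([0, 0, 1])))  # False
```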

Example 3.4

  • The span of a single element \(\mathbf{s}\) of an \(\FF\)-vector space V is \(\{ l \mathbf{s} : l \in \FF\}\), since any linear combination of \(\mathbf{s}\) is just a scalar multiple of \(\mathbf{s}\).
  • Let \(\mathbf{u} = \begin{pmatrix} 1\\ 0 \\ 0 \end{pmatrix}, \mathbf{v} = \begin{pmatrix} 0\\1\\0 \end{pmatrix} \in \RR^3\). Then \(\spa ( \mathbf{u}, \mathbf{v} )\) is the set \[\begin{equation*} \left\{ \begin{pmatrix} \lambda \\ \mu \\ 0 \end{pmatrix} : \lambda,\mu \in \RR \right\} \end{equation*}\]
  • The span of \(\mathbf{u} , \mathbf{v}\), and \(\mathbf{w} = \begin{pmatrix} 1\\2\\0 \end{pmatrix}\) is equal to \(\spa ( \mathbf{u} , \mathbf{v} )\), since \(\mathbf{w} = \mathbf{u} + 2\mathbf{v}\) is already in \(\spa(\mathbf{u},\mathbf{v})\); see the check below.
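
The third part can be checked by the rank computation just described: adding \(\mathbf{w}\) as an extra column does not increase the rank, so it does not enlarge the span. A quick sympy check (illustrative only):

```python
from sympy import Matrix

u = Matrix([1, 0, 0])
v = Matrix([0, 1, 0])
w = Matrix([1, 2, 0])

# w = u + 2v, so adding it as a column leaves the rank at 2.
print(Matrix.hstack(u, v).rank(), Matrix.hstack(u, v, w).rank())  # 2 2
```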

Lemma 3.4 \(\spa ( \mathbf{s} _1,\ldots, \mathbf{s} _n)\) is a subspace of V.

Proof. Write S for \(\spa ( \mathbf{s} _1,\ldots, \mathbf{s} _n)\). Recall that S consists of every linear combination \(\sum_{i=1}^n \lambda_i\mathbf{s}_i\), where the \(\lambda_i\) are scalars.

  • S contains the zero vector because it contains \(\sum_{i=1}^n 0\mathbf{s}_i\).
  • S is closed under addition because if \(\sum_{i=1}^n \lambda _i \mathbf{s}_i\) and \(\sum _{i=1}^n \mu_i \mathbf{s}_i\) are any two elements of S then \[\sum_{i=1}^n \lambda_i \mathbf{s}_i + \sum_{i=1}^n \mu_i \mathbf{s}_i = \sum_{i=1}^n (\lambda_i + \mu_i) \mathbf{s}_i\] is in S.
  • S is closed under scalar multiplication because if \(\sum_{i=1}^n \lambda_i \mathbf{s}_i\) is in S and \(\lambda\) is a scalar then \[ \lambda \sum_{i=1}^n \lambda_i \mathbf{s}_i = \sum_{i=1}^n (\lambda \lambda_i) \mathbf{s}_i\] is also in S.

Definition 3.5 Elements \(\mathbf{s}_1, \ldots, \mathbf{s}_n\) of the \(\FF\)-vector space V are called a spanning sequence if \(\spa( \mathbf{s}_1, \ldots, \mathbf{s}_n) = V\).


In other words, \(\mathbf{s}_1, \ldots, \mathbf{s}_n\) is a spanning sequence if every element of V can be written as a linear combination of elements \(\mathbf{s}_1, \ldots, \mathbf{s}_n\).

Lemma 3.5 If \(U \leq V\) and U contains a spanning sequence \(\mathcal{S}\) for V, then \(U=V\).
Proof. U is closed under taking linear combinations of elements of U, so it contains every linear combination of elements of \(\mathcal{S}\). But every element of V is a linear combination of elements of \(\mathcal{S}\), so U contains every element of V.

3.2.4 Finite-dimensional vector spaces

Definition 3.6 An \(\FF\)-vector space is called finite-dimensional if it has a spanning sequence.

(Notice that our ‘spanning sequences’ are always finite sequences; this is where the name ‘finite-dimensional’ comes from.)

So V is finite-dimensional if there is a finite sequence \(\mathbf{v}_1,\ldots,\mathbf{v}_n\) of elements of V such that any element of V is a linear combination of the \(\mathbf{v}_i\). Some examples of vector spaces which aren’t finite-dimensional are the vector space \(\{(a_0, a_1, \ldots) : a_i \in \mathbb{R}\}\) of all infinite real sequences, the vector space of all polynomials in one variable, and the vector space of all functions \(\mathbb{R} \to \mathbb{R}\).

Example 3.5 Recall that the standard basis vector \(\mathbf{e}_i \in \RR^n\) is the height n column vector whose entries are all zero except for the ith, which is one. Then the \(\mathbf{e}_i\) form a spanning sequence for the real vector space \(\RR^n\) since any vector \[\begin{equation*} \mathbf{v} = \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix} \end{equation*}\] can be written as a linear combination of the \(\mathbf{e}_i\) as follows: \[\begin{equation*} \mathbf{v} = v_1 \mathbf{e}_1 + \cdots + v_n \mathbf{e}_n. \end{equation*}\] It follows that \(\RR^n\) is finite-dimensional.
Example 3.6 The vectors \[\begin{equation*} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \text{ and } \begin{pmatrix} 0\\2\\2 \end{pmatrix} \end{equation*}\] form a spanning sequence for the vector space in the first part of Example 3.2.

3.2.5 Bases and dimension

Definition 3.7 A basis of a vector space is a linearly independent sequence of vectors which is also a spanning sequence.

Example 3.7

  • In the last part of Example 3.3 above we showed \(\mathbf{e}_1,\ldots,\mathbf{e}_n\) is linearly independent, and in Example 3.5 we showed it is a spanning sequence of \(\RR^n\). Thus \(\mathbf{e}_1,\ldots,\mathbf{e}_n\) is a basis of \(\RR^n\). This is called the standard basis of \(\RR^n\).
  • Let \(M = M_{2\times 2}(\RR)\) be the \(\RR\)-vector space of all 2✕2 real matrices, so the zero vector \(\mathbf{0}_M\) is the 2✕2 zero matrix. Let \(E_{ij}\) be the matrix with a 1 in position i, j and 0 elsewhere. Any element of M looks like \(\begin{pmatrix} a & b \\ c & d \end{pmatrix}\) for some \(a,b,c,d \in \RR\), and \[\begin{equation*} \begin{pmatrix} a & b \\ c & d \end{pmatrix}=a E_{11} + b E_{12} + cE_{21} + dE_{22}. \end{equation*}\] It follows that \(E_{11}, E_{12}, E_{21}, E_{22}\) are a spanning sequence for M. They are also linearly independent, for if \(\alpha,\beta,\gamma,\delta\) are scalars such that \[\begin{equation*} \alpha E_{11}+\beta E_{12} + \gamma E_{21} + \delta E_{22} = \mathbf{0}_M \end{equation*}\] then \[\begin{equation*} \begin{pmatrix} \alpha & \beta \\ \gamma&\delta \end{pmatrix} =\begin{pmatrix} 0&0\\0&0 \end{pmatrix} \end{equation*}\] and so \(\alpha=\beta=\gamma=\delta=0\). Therefore \(E_{11}, E_{12}, E_{21}, E_{22}\) is a basis of M.
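
The second part can also be verified computationally by flattening each matrix into a column vector, since a linear combination of the \(E_{ij}\) is the zero matrix exactly when the corresponding combination of flattened vectors is zero. A sympy sketch (note the 0-based indices, unlike the 1-based \(E_{ij}\) in the text):

```python
from sympy import Matrix, zeros

def E(i, j):
    """The 2x2 matrix with a 1 in position (i, j) and 0 elsewhere."""
    M = zeros(2, 2)
    M[i, j] = 1
    return M

# Flatten each E_ij into a column of length 4; the matrices are linearly
# independent in M iff these columns are linearly independent in R^4.
A = Matrix.hstack(*[E(i, j).reshape(4, 1) for i in range(2) for j in range(2)])
print(A.rank())  # 4, so the four matrices E_ij are linearly independent
```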

A basis of a vector space V is in particular a spanning sequence for V, so every element of V can be written as a linear combination of the elements of the basis. The following lemma shows that every element of V can be written in one and only one way as a linear combination of the elements of a particular basis.

Lemma 3.6 Let \(\mathcal{B}= \mathbf{b} _1, \ldots, \mathbf{b} _n\) be a basis of a vector space V. If \(\lambda_i\) and \(\mu_i\) are scalars such that \(\sum_{i=1}^n \lambda_{i} \mathbf{b}_i = \sum_{i=1}^n \mu_{i} \mathbf{b}_i\), then \(\lambda_{i}=\mu_{i}\) for all i.
Proof. Rearranging we get \[\begin{equation*} \mathbf{0}_V=\sum_{i=1}^n (\lambda_i-\mu_i) \mathbf{b}_i \end{equation*}\] and since \(\mathcal{B}\) is linearly independent, \(\lambda_i-\mu_i=0\) for all i.

This result shows that a basis can be used to “coordinatize” an abstract finite-dimensional vector space - to show that it is essentially “just like” \(\mathbb{R}^n\) or \(\mathbb{C}^n\).
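
Concretely, finding the coordinates of a vector with respect to a basis means solving the linear system whose matrix has the basis vectors as columns, and Lemma 3.6 guarantees the solution is unique. A sympy sketch using the basis \(\mathbf{x}, \mathbf{y}\) of \(\RR^2\) from Example 3.3:

```python
from sympy import Matrix

b1 = Matrix([1, 0])
b2 = Matrix([1, 1])
B = Matrix.hstack(b1, b2)  # basis vectors as columns

v = Matrix([3, 5])
coords = B.LUsolve(v)      # the unique solution of B * coords = v
print(coords.T)            # Matrix([[-2, 5]]): v = -2*b1 + 5*b2
```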

We would like to use the size of a basis of a vector space as a measure of the size of the vector space. The difficulty we have is that a vector space can have many different bases, and it is not clear that they should all have the same size. The next few results will help us prove this.

Lemma 3.7 If \(\mathbf{s}_1, \ldots, \mathbf{s}_m\) is a spanning sequence for a vector space V, \(\mathbf{v} = \sum_{i=1}^m \lambda_i \mathbf{s}_i\), and \(\lambda_j\neq 0\), then \[\mathbf{s}_1, \ldots, \mathbf{s}_{j-1}, \mathbf{v}, \mathbf{s}_{j+1}, \ldots, \mathbf{s}_m\] is also a spanning sequence for V.

Proof. Let \(S=\operatorname{span}(\mathbf{s}_1, \ldots, \mathbf{s}_{j-1}, \mathbf{v}, \mathbf{s}_{j+1}, \ldots, \mathbf{s}_m)\). Then S is a subspace of V, and we need to show that it equals V.

Notice that S contains all the \(\mathbf{s}_i\) except possibly \(\mathbf{s}_j\). In fact S contains \(\mathbf{s}_j\) too, as

\[ \mathbf{s}_j = \lambda_j^{-1} \mathbf{v} - \lambda_j^{-1} \sum _{i\neq j}\lambda_i \mathbf{s}_i.\]

Therefore S contains every \(\mathbf{s}_i\). Since it is a subspace, S contains every linear combination of the \(\mathbf{s}_i\), so it contains all of V, as \(\mathbf{s}_1, \ldots, \mathbf{s}_m\) span V. Hence \(S = V\).

The lemma says that if you have a spanning sequence and a vector \(\mathbf{v}\) then, so long as the condition holds, you can swap \(\mathbf{v}\) for one of the elements of the spanning sequence and it will still be a spanning sequence. We use it to prove the following key result:

Theorem 3.1 Let V be a vector space and suppose \(\mathbf{s}_1,\ldots, \mathbf{s}_m\) spans V and \(\mathbf{l}_1,\ldots, \mathbf{l}_n\) is a linearly independent sequence of elements of V. Then \(m \geq n\).

Proof. Since the \(\mathbf{s}_i\) are a spanning sequence, we can find scalars \(\lambda_i\) such that \(\mathbf{l}_1 = \sum_{i=1}^m \lambda_i \mathbf{s}_i\). Not all the \(\lambda_i\) can be zero, for then \(\mathbf{l}_1\) would be the zero vector, but a linearly independent sequence can’t contain the zero vector. Therefore without loss of generality - by renumbering if necessary - \(\lambda_1 \neq 0\). So \(\mathbf{l}_1, \mathbf{s}_2, \ldots, \mathbf{s}_m\) is a spanning sequence for V by Lemma 3.7.

Now \(\mathbf{l}_2 = \mu_1 \mathbf{l}_1 + \sum_{i\geq 2} \lambda_i \mathbf{s}_i\) for some scalars \(\mu_1, \lambda_i\). The \(\lambda_i\) for \(i \geq 2\) can’t all be zero, for then \(\mathbf{l}_2 - \mu_1 \mathbf{l}_1 = \mathbf{0}_V\) would be a linear dependence relation, contradicting the \(\mathbf{l}_i\) being linearly independent. So again without loss of generality \(\lambda_2\neq 0\), and \(\mathbf{l}_1, \mathbf{l}_2, \mathbf{s}_3, \ldots, \mathbf{s}_m\) is a spanning sequence.

If \(m < n\) then after \(m\) steps of this procedure we end up with \(\mathbf{l}_1, \ldots, \mathbf{l}_m\) being a spanning sequence, so \(\mathbf{l}_{m+1} \in \operatorname{span}(\mathbf{l}_1,\ldots, \mathbf{l}_m)\). This contradicts \(\mathbf{l}_1, \ldots, \mathbf{l}_n\) being linearly independent, so we’re done.
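
Theorem 3.1 can be seen concretely: \(\RR^2\) is spanned by \(m = 2\) vectors, so any \(n = 3\) vectors in it must be linearly dependent, and a dependence relation can be computed as a non-zero nullspace element. A sympy illustration (the particular vectors are arbitrary):

```python
from sympy import Matrix

# Three vectors in R^2, as the columns of a 2x3 matrix.  Since R^2 is
# spanned by 2 vectors, Theorem 3.1 forces a non-zero solution of A l = 0.
A = Matrix([[1, 2, 4],
            [3, 5, 6]])
print(A.nullspace())  # a non-zero coefficient vector: a dependence relation
```
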
Corollary 3.1 Any two bases of a finite-dimensional vector space have the same size.
Proof. Let \(\mathcal{B} = \mathbf{b}_1, \ldots, \mathbf{b}_m\) and \(\mathcal{C} = \mathbf{c}_1, \ldots, \mathbf{c}_n\) be bases. Then in particular \(\mathcal{B}\) is a spanning sequence and \(\mathcal{C}\) is linearly independent, so \(m \geq n\). But also \(\mathcal{B}\) is linearly independent and \(\mathcal{C}\) is a spanning sequence, so \(n \geq m\). Hence \(m = n\).

Finally we can define dimension:

Definition 3.8 Let V be a finite-dimensional vector space. The dimension of V, written \(\dim V\), is the size of any basis of V.

(It’s not yet obvious that every finite-dimensional vector space has a basis, but we will prove this shortly.)


Example 3.8

  • The zero vector space has the empty sequence as a basis, so its dimension is zero. From Example 3.7 we see that \(\dim \RR^n = n\) and \(\dim M_{2\times 2}(\RR) = 4\).
  • You can generalize the calculation in Example 3.7 to prove that \(\dim M_{n\times m}(\RR) = \dim M_{n\times m}(\CC) = nm\).
  • Suppose V is a one-dimensional \(\mathbb{F}\)-vector space. It has a basis \(\mathbf{v}\) of size 1, and every element of V can be written as a linear combination of this basis, that is, a scalar multiple of \(\mathbf{v}\). So \(V = \{\lambda \mathbf{v} : \lambda \in \mathbb{F}\}\).

Example 3.9 Let V be the set of 3✕1 column vectors \(\begin{pmatrix} a\\b\\c \end{pmatrix}\) with real entries such that \(a+b+c=0\). You should check that V is a subspace of \(\RR^3\). To find \(\dim V\), we need a basis of V.

A typical element of V looks like \(\begin{pmatrix} a \\ b \\ -a-b \end{pmatrix}\), so a good start is to notice that \[\begin{equation} \tag{3.2} \begin{pmatrix} a\\b\\-a-b \end{pmatrix}= a \begin{pmatrix} 1\\0\\-1 \end{pmatrix} + b \begin{pmatrix} 0\\1\\-1 \end{pmatrix}. \end{equation}\] We might guess that the two vectors \(\mathbf{u} = \begin{pmatrix} 1\\0\\-1 \end{pmatrix}\) and \(\mathbf{v} = \begin{pmatrix} 0\\1\\-1 \end{pmatrix}\) are a basis. Since any element of V equals \(\begin{pmatrix} a\\b\\-a-b \end{pmatrix}\) for some \(a,b\), equation (3.2) shows that they are a spanning sequence. To check they are linearly independent, suppose that \(\lambda \mathbf{u} +\mu \mathbf{v} = \mathbf{0}_V\), so that \[\begin{equation*} \lambda \begin{pmatrix} 1\\0\\-1 \end{pmatrix} + \mu \begin{pmatrix} 0\\1\\-1 \end{pmatrix} = \begin{pmatrix} 0\\0\\0 \end{pmatrix} . \end{equation*}\] The vector on the left has entries \(\lambda, \mu, -\lambda-\mu\), so we have \(\lambda = \mu=0\). This shows that \(\mathbf{u}\) and \(\mathbf{v}\) are linearly independent, so they’re a basis of V, which therefore has dimension 2.
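
The same answer can be reached computationally: V is exactly the nullspace of the \(1\)✕\(3\) matrix \(\begin{pmatrix} 1 & 1 & 1 \end{pmatrix}\), and nullspace routines return a basis directly. A sympy check (it may return a different, equally valid basis from \(\mathbf{u}, \mathbf{v}\) above):

```python
from sympy import Matrix

# V = {(a, b, c) : a + b + c = 0} is the nullspace of (1 1 1).
A = Matrix([[1, 1, 1]])
basis = A.nullspace()
print(len(basis))  # 2 = dim V
print(basis)       # a basis of V, not necessarily u and v from the text
```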

3.2.6 Extending to a basis

We now show that any linearly independent sequence in a finite-dimensional vector space can be ‘extended to a basis’. The proof uses a lemma:

Lemma 3.8 (Extension lemma). Suppose \(\mathbf{l}_1, \ldots, \mathbf{l}_n\) are linearly independent. Let \(\mathbf{v} \notin \spa ( \mathbf{l} _1, \ldots, \mathbf{l} _n )\). Then \(\mathbf{l} _1,\ldots, \mathbf{l} _n, \mathbf{v}\) is linearly independent.

Proof. We’ll prove the contrapositive: if \(\mathbf{l} _1,\ldots, \mathbf{l}_n, \mathbf{v}\) is linearly dependent then \(\mathbf{v} \in \spa ( \mathbf{l} _1,\ldots, \mathbf{l}_n )\).

Suppose \(\mathbf{l} _1,\ldots, \mathbf{l} _n,\mathbf{v}\) is linearly dependent. There are scalars \(\lambda,\lambda_1,\ldots,\lambda_n\), not all zero, such that \[\begin{equation*} \lambda \mathbf{v} + \sum_{i=1}^n \lambda_i \mathbf{l}_i = \mathbf{0}_V. \end{equation*}\] \(\lambda\) can’t be zero, for then this equation would say that \(\mathbf{l} _1,\ldots, \mathbf{l} _n\) was linearly dependent. Therefore we can rearrange to get \[\begin{equation*} \mathbf{v} = -\lambda^{-1} \sum_{i=1}^n \lambda_i \mathbf{l}_i=\sum_{i=1}^n (-\lambda^{-1} \lambda_i)\mathbf{l}_i \in \spa ( \mathbf{l} _1,\ldots, \mathbf{l} _n). \end{equation*}\]

Proposition 3.1 (Extending to a basis). Let V be finite-dimensional and let \(\mathbf{l} _1,\ldots, \mathbf{l}_n\) be linearly independent. Then there is a basis of V containing \(\mathbf{l} _1,\ldots, \mathbf{l} _n\).

Proof. Let \(\mathcal{L}=( \mathbf{l} _1,\ldots, \mathbf{l} _n)\). Since V is finite-dimensional it has a finite spanning sequence \(\mathbf{v}_1,\ldots,\mathbf{v}_m\). Define sequences \(\mathcal{S}_0, \mathcal{S}_1, \ldots, \mathcal{S}_m\) of elements of V as follows: \(\mathcal{S}_0 = \mathcal{L}\), and for \(i\geq 0\), \[\begin{equation*}\mathcal{S}_{i+1}= \begin{cases} \mathcal{S}_i & \text{if }\mathbf{v}_{i+1} \in \spa \mathcal{S}_i \\ \mathcal{S}_i , \mathbf{v}_{i+1} & \text{otherwise,}\end{cases} \end{equation*}\] where \(\mathcal{S}_i , \mathbf{v}_{i+1}\) denotes the sequence \(\mathcal{S}_i\) with \(\mathbf{v}_{i+1}\) appended.
Note that in either case \(\mathbf{v}_{i+1} \in \spa \mathcal{S}_{i+1}\), and also that \(\mathcal{S}_0 \subseteq \mathcal{S}_1 \subseteq \cdots \subseteq \mathcal{S}_m\).

Each sequence \(\mathcal{S}_i\) is linearly independent, by applying Lemma 3.8 at each step, and in particular \(\mathcal{S}_m\) is linearly independent. Furthermore \(\spa \mathcal{S}_m\) contains the spanning sequence \(\mathbf{v}_1,\ldots, \mathbf{v}_m\) because for each i we have \(\mathbf{v}_i \in \spa \mathcal{S}_i \subseteq \spa \mathcal{S}_m\), so by Lemma 3.5, \(\spa \mathcal{S}_m = V\). Therefore \(\mathcal{S}_m\) is a basis containing \(\mathcal{L}\).
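
The proof of Proposition 3.1 is really an algorithm, and for column vectors it can be run directly: walk along a spanning sequence, appending each vector not already in the current span. A sympy sketch, where extend_to_basis is our own illustrative name:

```python
from sympy import Matrix

def extend_to_basis(independent, spanning):
    """Extend a linearly independent list of column vectors to a basis,
    following the proof of Proposition 3.1: keep each spanning vector
    that is not already in the span of the current list."""
    current = list(independent)
    for v in spanning:
        old_rank = Matrix.hstack(*current).rank() if current else 0
        if Matrix.hstack(*(current + [v])).rank() > old_rank:
            current.append(v)
    return current

l1 = Matrix([1, 1, 0])
std = [Matrix([1, 0, 0]), Matrix([0, 1, 0]), Matrix([0, 0, 1])]
print(len(extend_to_basis([l1], std)))  # 3: a basis of R^3 containing l1
```

Starting from the empty list, the same procedure extracts a basis from any spanning sequence, which is the content of Corollary 3.2 below.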

Corollary 3.2 Every finite-dimensional vector space has a basis.
Proof. Apply Proposition 3.1 to the empty sequence, which is linearly independent by convention.

Corollary 3.3 If \(\dim V=n\) then any \(n+1\) elements of V are linearly dependent.
Proof. V has a basis, hence a spanning sequence, of size n. By Theorem 3.1 any linearly independent sequence therefore has size at most n, so any \(n+1\) elements must be linearly dependent.

3.2.7 Dimensions of subspaces

If dimension is really a good measure of the size of a vector space, then when U is a subspace of V we ought to have \(\dim U \leq \dim V\). But it isn’t obvious from the definitions that a subspace of a finite-dimensional vector space even has a dimension, so we need the following:

Lemma 3.9 If \(U \leq V\) and V is finite-dimensional then U is finite-dimensional.

Proof. Suppose for a contradiction that U is not finite-dimensional, so it is not spanned by any finite set of elements of U.

We claim that for any \(n\geq 0\) there exists a linearly independent subset of U of size n. The proof is by induction, and for \(n=0\) the empty set works. For the inductive step, suppose \(\mathcal{L}\) is a linearly independent subset of U of size n. Since U is not spanned by any finite set of its elements, there exists \(\mathbf{u}\in U \setminus \spa \mathcal{L}\). Then \(\mathcal{L}\cup \{\mathbf{u}\}\) is linearly independent by Lemma 3.8 and has size \(n+1\), completing the inductive step.

In particular there is a linearly independent subset of V with size \(\dim V+1\), contradicting Corollary 3.3.

Proposition 3.2 Let U be a subspace of the finite-dimensional vector space V. Then

  • \(\dim U \leq \dim V\), and
  • if \(\dim U=\dim V\) then \(U=V\).

Proof.

  • U is finite-dimensional by Lemma 3.9, so it has a finite basis \(\mathcal{B}\). By Corollary 3.3, \(\mathcal{B}\), being a linearly independent subset of V, has size at most \(\dim V\). Therefore \(\dim U = | \mathcal{B} | \leq \dim V\).
  • If \(\dim U = \dim V\) and \(\mathbf{v} \in V \setminus U\) then \(\mathcal{B} , \mathbf{v}\) is linearly independent by Lemma 3.8. But it has size larger than \(\dim V\), contradicting Corollary 3.3. So \(V\setminus U=\emptyset\) and \(U=V\).
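
Proposition 3.2 can be observed numerically for subspaces of \(\RR^3\) given as spans: the dimension of the span is the rank of the matrix with those vectors as columns, which can never exceed 3. A final sympy illustration:

```python
from sympy import Matrix

# U = span of three vectors in R^3; dim U is the rank of the matrix
# having those vectors as columns, and rank <= 3 = dim R^3 always.
vecs = [Matrix([1, 0, -1]), Matrix([0, 1, -1]), Matrix([1, 1, -2])]
dim_U = Matrix.hstack(*vecs).rank()
print(dim_U)  # 2 < 3, so here U is a proper subspace of R^3
```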