# Diagonal Matrices of Linear Operators

So far we have looked at linear operators $T \in \mathcal L (V)$ on finite-dimensional vector spaces $V$ for which certain bases $B_V$ of $V$ yield an upper triangular matrix $\mathcal M (T, B_V)$. We will now look at even simpler matrices for linear operators – diagonal matrices (which are, of course, themselves upper triangular matrices). First let's look at the following definition.

Definition: If $V$ is a finite-dimensional vector space, then $T \in \mathcal L (V)$ is said to be **diagonalizable** if there exists a basis $B_V$ of $V$ such that $\mathcal M (T, B_V)$ is a diagonal matrix.

The following proposition will tell us that if $T$ has $\mathrm{dim} (V)$ *distinct* eigenvalues, then there exists a basis $B_V$ of $V$ for which $\mathcal M (T, B_V)$ is a diagonal matrix.

Proposition 1: If $T \in \mathcal L (V)$ has $\mathrm{dim} (V) = n$ distinct eigenvalues, then there exists a basis $B_V$ of $V$ for which $\mathcal M (T, B_V)$ is a diagonal matrix.

**Proof:** Let $T \in \mathcal L(V)$ and suppose that $T$ has $\mathrm{dim} (V) = n$ distinct eigenvalues, call them $\lambda_1, \lambda_2, …, \lambda_n \in \mathbb{F}$. For each of these eigenvalues, choose a nonzero eigenvector $v_j \in V$ that corresponds to $\lambda_j$ for $j = 1, 2, …, n$. Now we know that eigenvectors corresponding to distinct eigenvalues are linearly independent, and since there are $n = \mathrm{dim}(V)$ of them, $B_V = \{ v_1, v_2, …, v_n \}$ forms a basis of $V$. Note that $T(v_j) = \lambda_jv_j$ for $j = 1, 2, …, n$ and thus:

(1)
$$\mathcal M (T, B_V) = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}$$

- Therefore $\mathcal M (T, B_V)$ is a diagonal matrix. $\blacksquare$
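To see Proposition 1 in action, here is a minimal numeric sketch in Python (the matrix $A$ and its eigenvectors are a hypothetical example chosen for illustration, not from the text): an operator on $\mathbb{R}^2$ with two distinct eigenvalues, $2$ and $3$, has a basis of eigenvectors, and with respect to that basis its matrix is diagonal.

```python
# Hypothetical example: T acts on R^2, given in the standard basis by
# A = [[2, 1], [0, 3]], which has two distinct eigenvalues, 2 and 3.
A = [[2, 1], [0, 3]]

def apply(M, v):
    """Multiply a 2x2 matrix M by a vector v."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

# Eigenvectors: v1 = (1, 0) for lambda = 2, and v2 = (1, 1) for lambda = 3.
v1, v2 = [1, 0], [1, 1]
assert apply(A, v1) == [2*x for x in v1]   # T(v1) = 2 v1
assert apply(A, v2) == [3*x for x in v2]   # T(v2) = 3 v2

# In the basis B_V = {v1, v2}: T(v1) = 2 v1 + 0 v2 and T(v2) = 0 v1 + 3 v2,
# so M(T, B_V) = [[2, 0], [0, 3]], a diagonal matrix.
```

The diagonal entries of $\mathcal M (T, B_V)$ are exactly the eigenvalues, in the order the eigenvectors appear in the basis.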

Now the following theorem will give us several statements that are equivalent to $T$ having a diagonal matrix with respect to some basis $B_V$ of $V$.

Theorem 1: If $T \in \mathcal L (V)$ for the finite-dimensional vector space $V$ and $\lambda_1, \lambda_2, …, \lambda_m$ are the distinct eigenvalues of $T$, then the following statements are equivalent:

a) There exists a basis $B_V$ of $V$ such that $\mathcal M (T, B_V)$ is a diagonal matrix.

b) $V$ has a basis consisting of eigenvectors of $T$ corresponding to the eigenvalues $\lambda_1, \lambda_2, …, \lambda_m$.

c) There exist one-dimensional subspaces $U_1, U_2, …, U_n$ of $V$ (where $n = \mathrm{dim}(V)$) that are invariant under $T$ and such that $V = \bigoplus_{i=1}^{n} U_i$.

d) $V = \bigoplus_{i=1}^{m} \mathrm{null} (T - \lambda_iI)$.

e) $\mathrm{dim} V = \sum_{i=1}^{m} \mathrm{dim} ( \mathrm{null} (T - \lambda_iI))$.

**Proof of $a) \implies b)$:** Suppose that $\mathcal M (T, B_V)$ is a diagonal matrix for some basis $B_V = \{ v_1, v_2, …, v_n \}$, with diagonal entries $\mu_1, \mu_2, …, \mu_n$. Then $T(v_j) = \mu_jv_j$ for $j = 1, 2, …, n$, so each basis vector $v_j$ is an eigenvector of $T$ (and each $\mu_j$ is one of the distinct eigenvalues $\lambda_1, \lambda_2, …, \lambda_m$). Thus $B_V$ is a basis of eigenvectors. (The converse computation appears in the proof of Proposition 1 above.)

**Proof of $b) \implies c)$:** Suppose that $V$ has a basis consisting of eigenvectors of $T$ corresponding to the eigenvalues $\lambda_1, \lambda_2, …, \lambda_m$, and let $\{ v_1, v_2, …, v_n \}$ be this basis of eigenvectors. Let $U_j = \mathrm{span} (v_j)$ for $j = 1, 2, …, n$. Then clearly $V = \sum_{i=1}^{n} U_i$.

- Each $U_j$ is a one-dimensional subspace of $V$. To show that each $U_j$ is invariant under $T$, let $u \in U_j = \mathrm{span} (v_j)$. Then for some $a \in \mathbb{F}$ we have that $u = av_j$. So $T(u) = T(av_j) = aT(v_j) = a \lambda_j v_j \in \mathrm{span} (v_j) = U_j$, where $\lambda_j$ denotes the eigenvalue to which $v_j$ corresponds.

- Now since $\{ v_1, v_2, …, v_n \}$ is a basis of $V$, this set is linearly independent, and so every $v \in V$ can be written uniquely as a linear combination of these basis vectors for some scalars $a_1, a_2, …, a_n \in \mathbb{F}$:

(2)
$$v = a_1v_1 + a_2v_2 + \cdots + a_nv_n$$

- Since $U_j = \mathrm{span} (v_j)$ for $j = 1, 2, …, n$ we have that $v$ can be uniquely written as:

(3)
$$v = u_1 + u_2 + \cdots + u_n \quad \mathrm{where} \: u_j = a_jv_j \in U_j$$

- Therefore $V = \bigoplus_{i=1}^{n} U_i$.

**Proof of $c) \implies b)$:** Suppose that there exist one-dimensional subspaces $U_1, U_2, …, U_n$ such that each is invariant under $T$ and $V = \bigoplus_{i=1}^{n} U_i$. Let $v_j \in U_j$ be a nonzero vector for each $j = 1, 2, …, n$. Since $U_j$ is one-dimensional and invariant under $T$, we have $T(v_j) \in U_j = \mathrm{span} (v_j)$, so $T(v_j) = \lambda v_j$ for some $\lambda \in \mathbb{F}$; that is, each $v_j$ is an eigenvector of $T$. Now since $V = \bigoplus_{i=1}^{n} U_i$, every $v \in V$ can be written uniquely as a sum $v = u_1 + u_2 + … + u_n$ where $u_j \in U_j$ for $j = 1, 2, …, n$. But each $u_j$ is a scalar multiple of the nonzero vector $v_j$ since $U_j$ is one-dimensional, so every $v \in V$ is a unique linear combination of $v_1, v_2, …, v_n$, and hence $\{ v_1, v_2, …, v_n \}$ must be a basis of $V$.

**Proof of $b) \implies d)$:** Suppose that $V$ has a basis of eigenvectors of $T$ corresponding to the eigenvalues $\lambda_1, \lambda_2, …, \lambda_m$, call it $\{v_1, v_2, …, v_n \}$. Then every $v \in V$ is a linear combination of these basis vectors. But each basis vector lies in $\mathrm{null} (T - \lambda_jI)$ for its corresponding eigenvalue $\lambda_j$, so grouping the terms of this linear combination by eigenvalue shows that

(4)
$$V = \sum_{i=1}^{m} \mathrm{null} (T - \lambda_iI)$$

- We now need to show that this sum is direct. Suppose that $0 = u_1 + u_2 + … + u_m$ where $u_j \in \mathrm{null} (T - \lambda_jI)$ for $j = 1, 2, …, m$. Now each nonzero $u_j$ is an eigenvector of $T$ corresponding to $\lambda_j$. But eigenvectors corresponding to the distinct eigenvalues $\lambda_1, \lambda_2, …, \lambda_m$ are linearly independent, so this implies that $u_1 = u_2 = … = u_m = 0$. Hence the sum is direct and $V = \bigoplus_{i=1}^{m} \mathrm{null} (T - \lambda_iI)$.

**Proof of $d) \implies e)$:** Suppose that $V = \bigoplus_{i=1}^{m} \mathrm{null} (T - \lambda_iI)$. Since $V$ is finite-dimensional and the $\mathrm{null} (T - \lambda_iI)$ for $i = 1, 2, …, m$ are subspaces of $V$ whose sum is direct, it follows immediately that $\mathrm{dim} V = \mathrm{dim} ( \mathrm{null} (T - \lambda_1I)) + \mathrm{dim} ( \mathrm{null} (T - \lambda_2I)) + … + \mathrm{dim} ( \mathrm{null} (T - \lambda_mI))$.
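Statement e) doubles as a practical diagonalizability test: sum the eigenspace dimensions and compare with $\mathrm{dim}(V)$. Below is a minimal Python sketch for $2 \times 2$ matrices (the matrices and the helper names `rank2` and `eigenspace_dim` are hypothetical, chosen for illustration), using $\mathrm{dim} \, \mathrm{null} (A - \lambda I) = 2 - \mathrm{rank}(A - \lambda I)$.

```python
def rank2(M):
    """Rank of a 2x2 matrix with exact (integer) entries."""
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    if a*d - b*c != 0:          # nonzero determinant => full rank
        return 2
    if a or b or c or d:        # singular but not the zero matrix
        return 1
    return 0

def eigenspace_dim(A, lam):
    """dim null(A - lam*I) = 2 - rank(A - lam*I) for a 2x2 matrix A."""
    M = [[A[0][0] - lam, A[0][1]],
         [A[1][0],       A[1][1] - lam]]
    return 2 - rank2(M)

# Diagonalizable: distinct eigenvalues 2 and 3, eigenspace dimensions sum to 2.
A = [[2, 1], [0, 3]]
assert eigenspace_dim(A, 2) + eigenspace_dim(A, 3) == 2

# Not diagonalizable: the only eigenvalue of [[1, 1], [0, 1]] is 1, and its
# eigenspace is 1-dimensional, so the dimensions sum to 1 < 2 = dim(V).
B = [[1, 1], [0, 1]]
assert eigenspace_dim(B, 1) == 1
```

The second matrix is the classic non-diagonalizable example: criterion e) fails because the eigenspace dimensions fall short of $\mathrm{dim}(V)$.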

**Proof of $e) \implies b)$:** Suppose that $n = \mathrm{dim} (V) = \sum_{i=1}^{m} \mathrm{dim} ( \mathrm{null} (T - \lambda_iI))$. Since each $\mathrm{null} (T - \lambda_jI)$ for $j = 1, 2, …, m$ is a subspace of $V$, choose a basis for each $\mathrm{null} (T - \lambda_jI)$, and pool all of these bases to form a set of $n$ eigenvectors $\{ v_1, v_2, …, v_n \}$ of $T$. We want to show that this set of vectors is linearly independent, so that it is a basis of $V$.

- Suppose that for $a_1, a_2, …, a_n \in \mathbb{F}$ we have that $a_1v_1 + a_2v_2 + … + a_nv_n = 0$. Then for $j = 1, 2, …, m$ let $u_j$ be the sum of the terms $a_kv_k$ such that $v_k \in \mathrm{null}(T - \lambda_jI)$, so that $u_1 + u_2 + … + u_m = 0$. Each nonzero $u_j$ would be an eigenvector of $T$ corresponding to the eigenvalue $\lambda_j$, and since eigenvectors corresponding to the distinct eigenvalues $\lambda_1, \lambda_2, …, \lambda_m$ are linearly independent, we must have $u_1 = u_2 = … = u_m = 0$.

- Now since $u_j$ is the sum of the terms $a_kv_k$ such that $v_k \in \mathrm{null} (T - \lambda_jI)$, the linear independence of each chosen basis implies that all of the coefficients $a_k$ are equal to zero. Thus $\{ v_1, v_2, …, v_n \}$ is a linearly independent set of $n$ eigenvectors, so $\{ v_1, v_2, …, v_n \}$ is a basis of $V$. $\blacksquare$
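The pooling construction in this last step can be checked numerically. In the following hypothetical example (the matrix and vectors are illustrative, not from the text), collecting one basis vector from each eigenspace of $A = \begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix}$ gives $n = 2$ eigenvectors, and a nonzero determinant certifies their linear independence:

```python
# Hypothetical example: A = [[2, 1], [0, 3]] acting on R^2.
# null(A - 2I) is spanned by v1 = (1, 0); null(A - 3I) is spanned by v2 = (1, 1).
v1 = [1, 0]
v2 = [1, 1]

# Pooling the eigenspace bases gives 2 = dim(R^2) eigenvectors. A nonzero
# determinant of the matrix with columns v1, v2 is equivalent to linear
# independence, so {v1, v2} is a basis of R^2, as in the proof of e) => b).
det = v1[0]*v2[1] - v2[0]*v1[1]
assert det != 0
```

For larger matrices the same check works with any rank or determinant routine; the point is only that the pooled eigenvectors span the whole space.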
