
Linear Transformations

Reading time: ~40 min

Functions describe relationships between sets and thereby add dynamism and expressive power to set theory. Likewise, linear transformations describe linearity-respecting relationships between vector spaces. They are useful for understanding a variety of vector space phenomena, and their study gives rise to a generalization of the notion of linear dependence which is very useful in numerical applications of linear algebra (including describing the structure of real-world datasets).

A linear transformation L is a function from one vector space to another which satisfies L(\alpha \mathbf{v} + \beta \mathbf{w}) = \alpha L(\mathbf{v}) + \beta L(\mathbf{w}) for all vectors \mathbf{v}, \mathbf{w} and all scalars \alpha, \beta. Geometrically, these are "flat maps": a function is linear if and only if it maps equally spaced lines to equally spaced lines or points.

In \mathbb{R}^2, reflection across the line y=x, defined by L\left(\begin{bmatrix} x \\ y \end{bmatrix}\right) = \begin{bmatrix} y \\ x \end{bmatrix}, is linear because

\begin{align*}L\left(\alpha \begin{bmatrix} x_1 \\ y_1 \end{bmatrix} + \beta \begin{bmatrix} x_2 \\ y_2 \end{bmatrix} \right) &= \begin{bmatrix} \alpha y_1 + \beta y_2 \\ \alpha x_1 + \beta x_2 \end{bmatrix} \\ &= \alpha \begin{bmatrix} y_1 \\ x_1 \end{bmatrix} + \beta \begin{bmatrix} y_2 \\ x_2 \end{bmatrix} \\ &= \alpha L\left(\begin{bmatrix} x_1 \\ y_1 \end{bmatrix}\right) + \beta L\left(\begin{bmatrix} x_2 \\ y_2 \end{bmatrix}\right).\end{align*}
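We can also spot-check the linearity identity numerically. The sketch below (using NumPy; the random test vectors and seed are arbitrary choices) verifies L(\alpha \mathbf{v} + \beta \mathbf{w}) = \alpha L(\mathbf{v}) + \beta L(\mathbf{w}) for the coordinate-swap map:

```python
import numpy as np

def L(v):
    # Reflection across the line y = x: swap the two coordinates.
    return np.array([v[1], v[0]])

# Check L(a*v + b*w) == a*L(v) + b*L(w) for many random scalars and vectors.
rng = np.random.default_rng(0)
for _ in range(100):
    a, b = rng.standard_normal(2)
    v, w = rng.standard_normal(2), rng.standard_normal(2)
    assert np.allclose(L(a * v + b * w), a * L(v) + b * L(w))
```

A finite number of random checks is not a proof, of course; the algebraic calculation above is what establishes linearity for all inputs.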

Many fundamental geometric transformations are linear. The figure below illustrates several linear transformations (as well as one nonlinear one, for comparison) from the plane to the plane. The leftmost column shows a square grid of points, and the rightmost column shows the images of those points. The other columns show each point somewhere along the path from its original location in the domain to its final location in the codomain, to help you get a sense of which points go where.

This 3Blue1Brown video provides some helpful animated illustrations of linear transformations:


The rank of a linear transformation from one vector space to another is the dimension of its range.

If L\left(\begin{bmatrix} x \\ y \\ z \end{bmatrix}\right) = \begin{bmatrix} z+y \\ z-y \\ 0 \end{bmatrix}, then the rank of L is 2, since its range is the xy-plane in \mathbb{R}^3.
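The rank can be computed numerically from the transformation's matrix with respect to the standard basis (a sketch using NumPy; `matrix_rank` uses an SVD-based tolerance, so it is robust for small examples like this one):

```python
import numpy as np

# Matrix of L(x, y, z) = (z + y, z - y, 0): columns are L(e1), L(e2), L(e3).
A = np.array([[0,  1, 1],
              [0, -1, 1],
              [0,  0, 0]])

print(np.linalg.matrix_rank(A))  # 2
```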

Find the rank of the linear transformation L which maps each vector [x,y] to the closest point [a,2a] on the line y = 2x. The rank is 1.

Solution. The range of L is the line y = 2x, since every point in the plane maps to a point on this line, and every point on the line is the image under L of infinitely many points in the plane (all of the points on the line perpendicular to y=2x through that point). Since a line through the origin is a one-dimensional vector space, the rank is \boxed{1}.
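This answer can be confirmed with the projection's matrix. The sketch below (using NumPy) builds the standard orthogonal-projection matrix \mathbf{u}\mathbf{u}^\top / (\mathbf{u}^\top \mathbf{u}) for the direction vector \mathbf{u} = (1, 2) of the line:

```python
import numpy as np

u = np.array([1.0, 2.0])          # direction vector of the line y = 2x
P = np.outer(u, u) / (u @ u)      # orthogonal projection matrix onto span(u)

print(P @ np.array([3.0, 1.0]))   # closest point on the line to (3, 1): [1. 2.]
print(np.linalg.matrix_rank(P))   # 1
```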

What are the ranks of the five transformations illustrated above?

  1. The rank of the rotation is 2.
  2. The rank of the reflection is 2.
  3. The rank of the scaling transformation is 2.
  4. The rank of the shearing transformation is 2.
  5. The rank of the projection is 1.

Null space

The null space of a linear transformation is the set of vectors which are mapped to the zero vector by the linear transformation.

If L\left(\begin{bmatrix} x \\ y \\ z \end{bmatrix}\right) = \begin{bmatrix} z+y \\ z-y \\ 0 \end{bmatrix}, then the null space of L is equal to \mathrm{span}\left(\left\{\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}\right\}\right), since L(\mathbf{v}) = \boldsymbol{0} if and only if \mathbf{v} = \begin{bmatrix} x \\ 0 \\ 0 \end{bmatrix} for some x\in \mathbb{R}.
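A null space basis can be computed symbolically (a sketch using SymPy, whose `nullspace` method returns a list of basis vectors of the kernel):

```python
import sympy as sp

# Matrix of L(x, y, z) = (z + y, z - y, 0) with respect to the standard basis.
A = sp.Matrix([[0,  1, 1],
               [0, -1, 1],
               [0,  0, 0]])

basis = A.nullspace()
print(basis)  # [Matrix([[1], [0], [0]])]
```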

Note that the range of a transformation is a subset of its codomain, while the null space is a subset of its domain.

Because linear transformations respect the linear structure of a vector space, to check that two transformations from a given vector space to another are equal, it suffices to check that they map all of the vectors in a given basis of the domain to the same vectors in the codomain:

Exercise (basis equality theorem) Suppose that V and W are vector spaces and that L_1 and L_2 are linear transformations from V to W. Suppose that \mathcal{B} is a basis of V and that L_1(\mathbf{b}) = L_2(\mathbf{b}) for all \mathbf{b}\in \mathcal{B}. Show that L_1(\mathbf{v}) = L_2(\mathbf{v}) for all \mathbf{v} \in V.

Solution. Let \mathbf{v} \in V be an arbitrary vector. Since \mathcal{B} = \{\mathbf{b}_1, \ldots, \mathbf{b}_n\} is a basis, we can find coefficients c_1, \cdots, c_{n} \in \mathbb{R} such that \mathbf{v} = c_{1}\mathbf{b}_1 + \cdots + c_{n}\mathbf{b}_n. Since L_1 and L_2 are linear, we have

\begin{align*}L_1(\mathbf{v}) &= L_{1}(c_{1}\mathbf{b}_1 + \cdots + c_{n}\mathbf{b}_n) \\ &= c_{1}L_{1}(\mathbf{b}_1) + \cdots + c_{n}L_{1}(\mathbf{b}_n) \\ &= c_{1}L_{2}(\mathbf{b}_1) + \cdots + c_{n}L_{2}(\mathbf{b}_n) \\ &= L_{2}(c_{1}\mathbf{b}_1 + \cdots + c_{n}\mathbf{b}_n) \\ &= L_{2}(\mathbf{v}).\end{align*}
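The same fact, stated in matrix terms: a linear map is completely determined by its values on a basis. The sketch below (using NumPy; the random matrices are arbitrary, and a random matrix B is invertible with probability one, so its columns generically form a basis of \mathbb{R}^3) reconstructs a map from its values on a basis and checks that the reconstruction agrees with the original everywhere:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))   # some linear map L1 on R^3
B = rng.standard_normal((3, 3))   # columns form a basis of R^3 (generically invertible)

images = M @ B                    # the values of L1 on the basis vectors
M2 = images @ np.linalg.inv(B)    # the unique linear map with those basis values

v = rng.standard_normal(3)
assert np.allclose(M2 @ v, M @ v)  # the reconstructed map agrees on arbitrary vectors
```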

What is the dimension of the null space of the linear transformation L([x,y,z]) = [y,z,0]? What is the rank of L?

The dimension of the null space is 1 and the rank is 2.

Solution. To find the dimension of the null space, let us first describe it explicitly. L(x,y,z) = (y,z,0) = \boldsymbol{0} when y = z = 0, regardless of what x is. Thus the null space is \{[x,0,0] \mid x \in \mathbb{R}\}, which is just a line with basis vector [1,0,0]. Thus, the dimension of the null space is \boxed{1}. The range of L is the xy-plane, which has dimension \boxed{2}.
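Both numbers can be checked numerically (a sketch using NumPy and SciPy; `null_space` returns an orthonormal basis of the kernel, so its number of columns is the dimension of the null space):

```python
import numpy as np
from scipy.linalg import null_space

# Matrix of L(x, y, z) = (y, z, 0): columns are L(e1), L(e2), L(e3).
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])

rank = np.linalg.matrix_rank(A)
nullity = null_space(A).shape[1]   # number of basis vectors of the null space
print(rank, nullity)               # 2 1
```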

We call the dimension of the null space of a linear transformation the nullity of the transformation. In the previous exercise, the rank and the nullity of L add to 3, which is the dimension of the domain of the transformation. This is true in general: for the zero transformation from V to W, the rank (which is 0) and the nullity (which is the dimension of V) sum to the dimension of V. From there, if you modify the transformation so that it maps one fewer dimension's worth of vectors to the zero vector, its nullity goes down by 1 and its rank goes up by 1.

Theorem (Rank-nullity theorem) If V and W are vector spaces and L: V \to W is a linear transformation, then the rank of L and the nullity of L sum to the dimension of V.

Proof. If we extend any basis \{\mathbf{v}_1, \ldots \mathbf{v}_k\} of the null space of L to a basis

\begin{align*}\{\mathbf{v}_1, \ldots, \mathbf{v}_k, \mathbf{v}_{k+1}, \ldots, \mathbf{v}_n\}\end{align*}

of V, then we claim that

\begin{align*}\{L(\mathbf{v}_{k+1}), \ldots, L(\mathbf{v}_n)\}\end{align*}

is a basis for the range of L.

These vectors are linearly independent because

\begin{align*} c_{k+1}L(\mathbf{v}_{k+1}) + \cdots + c_nL(\mathbf{v}_n) = \boldsymbol{0}\end{align*}

implies, by linearity, that

\begin{align*}L(c_{k+1}\mathbf{v}_{k+1} + \cdots + c_n\mathbf{v}_n) = \boldsymbol{0},\end{align*}

which in turn implies that c_{k+1}\mathbf{v}_{k+1} + \cdots + c_n\mathbf{v}_n is in the null space of L. Since \{\mathbf{v}_1, \ldots, \mathbf{v}_k\} spans the null space of L, this vector is also a linear combination of \mathbf{v}_1, \ldots, \mathbf{v}_k; and since \{\mathbf{v}_1, \ldots, \mathbf{v}_n\} is linearly independent, it must therefore be equal to the zero vector, which in turn implies that all of the weights c_{k+1}, \ldots, c_n are zero. This concludes the proof that \{L(\mathbf{v}_{k+1}), \ldots, L(\mathbf{v}_n)\} is linearly independent.

To see that \{L(\mathbf{v}_{k+1}), \ldots, L(\mathbf{v}_n)\} spans the range of L, note that if \mathbf{w} = L(\mathbf{v}) for some \mathbf{v}, then writing \mathbf{v} as a linear combination of \mathbf{v}_1, \ldots, \mathbf{v}_n, we have

\begin{align*}\mathbf{w} = L(c_1 \mathbf{v}_1 + \cdots + c_n \mathbf{v}_n) = L(c_{k+1}\mathbf{v}_{k+1} + \cdots + c_n \mathbf{v}_n),\end{align*}

by linearity of L, since \mathbf{v}_1, \ldots, \mathbf{v}_k are in the null space of L. This shows that \mathbf{w} is in the span of the vectors \{L(\mathbf{v}_{k+1}), \ldots, L(\mathbf{v}_n)\}.

Since the list \{L(\mathbf{v}_{k+1}), \ldots, L(\mathbf{v}_n)\} spans the range of L and is linearly independent, it is a basis of the range of L. Therefore, the dimension of the range of L is n-k, and the rank of L plus the nullity of L is (n-k)+k = n.
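The theorem can be spot-checked on random matrices (a sketch using NumPy and SciPy; both `matrix_rank` and `null_space` use SVD-based tolerances, and zeroing a column just makes the null space nontrivial so the check is not vacuous):

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(2)
for _ in range(20):
    m, n = rng.integers(1, 6, size=2)   # random codomain and domain dimensions
    A = rng.standard_normal((m, n))
    A[:, rng.integers(n)] = 0.0         # force a nontrivial null space
    rank = np.linalg.matrix_rank(A)
    nullity = null_space(A).shape[1]
    assert rank + nullity == n          # rank + nullity = dimension of the domain
```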

Suppose you're designing an app that recommends cars. For every person in your database, you have collected twenty values: age, height, gender, income, credit score, etc. In your warehouse are ten types of cars. You envision your recommendation system as a linear transformation T: \mathbb{R}^{20} \to \mathbb{R}^{10} that takes in a person's data and then returns a number for each car, reflecting how well that car fits their needs. The rank of T can be as high as ten, which we might summarize by saying that your recommendation system can have ten degrees of complexity.

After some time, you find that storing all twenty variables takes up too much space in your database. Instead, you decide to take those twenty variables and apply a linear aggregate score function S: \mathbb{R}^{20} \to \mathbb{R}^{3}, with the three output components corresponding to health, personality, and finances. You also compute a linear map R: \mathbb{R}^{3} \to \mathbb{R}^{10} that takes in these three aggregate scores and returns a vector of recommendation values. The total recommendation system is the composition R \circ S: \mathbb{R}^{20} \to \mathbb{R}^{10}. What is the maximum possible rank of R \circ S? What does this mean for the complexity of this recommendation system?

Solution. The image of the transformation R \circ S: \mathbb{R}^{20} \to \mathbb{R}^{10} is contained in the image of the transformation R:\mathbb{R}^{3} \to \mathbb{R}^{10}. As a result, the rank of R \circ S is at most the rank of R, which is at most three. By reducing your twenty basic variables to three combined scores, your recommendation system only has three degrees of freedom, and can therefore only distinguish customers along three axes.
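The bottleneck is easy to see numerically. The sketch below (using NumPy; the random matrices stand in for the hypothetical S and R, whose actual entries would come from your application) shows that the composed system's rank cannot exceed the intermediate dimension:

```python
import numpy as np

rng = np.random.default_rng(3)
S = rng.standard_normal((3, 20))   # aggregate-score map R^20 -> R^3
R = rng.standard_normal((10, 3))   # recommendation map R^3 -> R^10

T = R @ S                          # matrix of the composition R o S : R^20 -> R^10
print(np.linalg.matrix_rank(T))    # at most 3 (generically exactly 3)
```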
