
A geometric interpretation of the covariance matrix


In this article, we provide an intuitive, geometric interpretation of the covariance matrix, by exploring the relation between linear transformations and the resulting data covariance. Most textbooks explain the shape of data based on the concept of covariance matrices. Instead, we take a backwards approach and explain the concept of covariance matrices based on the shape of data.

In a previous article, we discussed the concept of variance, and provided a derivation and proof of the well known formula to estimate the sample variance. Figure 1 was used in this article to show that the standard deviation, as the square root of the variance, provides a measure of how much the data is spread across the feature space.

Normal distribution

Figure 1. Gaussian density function. For normally distributed data, 68% of the samples fall within the interval defined by the mean plus and minus the standard deviation.

We showed that an unbiased estimator of the sample variance can be obtained by:

(1)   \begin{align*} \sigma_x^2 &= \frac{1}{N-1} \sum_{i=1}^N (x_i - \mu)^2\\ &= \mathbb{E}[ (x - \mathbb{E}(x)) (x - \mathbb{E}(x))]\\ &= \sigma(x,x) \end{align*}
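As a quick numerical sanity check of equation (1), the following NumPy sketch (illustrative values, not part of the derivation) computes the unbiased sample variance by hand and compares it to NumPy's built-in estimator:

```python
import numpy as np

# A small 1D sample (arbitrary illustrative values).
x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

# Equation (1): divide by N - 1 to obtain the unbiased sample variance.
n = x.size
mu = x.mean()
var_manual = np.sum((x - mu) ** 2) / (n - 1)

# np.var with ddof=1 applies the same 1/(N-1) correction.
assert np.isclose(var_manual, np.var(x, ddof=1))
```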

However, variance can only be used to explain the spread of the data in the directions parallel to the axes of the feature space. Consider the 2D feature space shown by figure 2:

Data with a positive covariance

Figure 2. The diagonal spread of the data is captured by the covariance.

For this data, we could calculate the variance \sigma(x,x) in the x-direction and the variance \sigma(y,y) in the y-direction. However, the horizontal and vertical spread of the data do not explain the clear diagonal correlation. Figure 2 clearly shows that on average, if the x-value of a data point increases, then its y-value increases as well, resulting in a positive correlation. This correlation can be captured by extending the notion of variance to what is called the ‘covariance’ of the data:

(2)   \begin{equation*} \sigma(x,y) = \mathbb{E}[ (x - \mathbb{E}(x)) (y - \mathbb{E}(y))] \end{equation*}

For 2D data, we thus obtain \sigma(x,x), \sigma(y,y), \sigma(x,y) and \sigma(y,x). These four values can be summarized in a matrix, called the covariance matrix:

(3)   \begin{equation*} \Sigma = \begin{bmatrix} \sigma(x,x) & \sigma(x,y) \\[0.3em] \sigma(y,x) & \sigma(y,y) \\[0.3em] \end{bmatrix} \end{equation*}

If x is positively correlated with y, y is also positively correlated with x. In other words, we can state that \sigma(x,y) = \sigma(y,x). Therefore, the covariance matrix is always a symmetric matrix with the variances on its diagonal and the covariances off-diagonal. Two-dimensional normally distributed data is explained completely by its mean and its 2\times 2 covariance matrix. Similarly, a 3 \times 3 covariance matrix is used to capture the spread of three-dimensional data, and a N \times N covariance matrix captures the spread of N-dimensional data.
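To make this concrete, here is a short NumPy sketch (with illustrative synthetic data, not data from the article) that estimates the covariance matrix of correlated 2D data and verifies its symmetry:

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2D data: y follows x plus noise, so sigma(x, y) > 0.
x = rng.standard_normal(1000)
y = 0.8 * x + 0.3 * rng.standard_normal(1000)

# np.cov expects one variable per row; the result is the 2x2 matrix of (3).
Sigma = np.cov(np.vstack([x, y]))

assert np.isclose(Sigma[0, 1], Sigma[1, 0])  # sigma(x,y) = sigma(y,x)
assert Sigma[0, 1] > 0                       # positive correlation
```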

Figure 3 illustrates how the overall shape of the data defines the covariance matrix:

The spread of the data is defined by its covariance matrix

Figure 3. The covariance matrix defines the shape of the data. Diagonal spread is captured by the covariance, while axis-aligned spread is captured by the variance.

Now let’s forget about covariance matrices for a moment. Each of the examples in figure 3 can simply be considered to be a linearly transformed instance of figure 4:

White data

Figure 4. Data with unit covariance matrix is called white data.

Let the data shown by figure 4 be D, then each of the examples shown by figure 3 can be obtained by linearly transforming D:

(4)   \begin{equation*} D' = T \, D \end{equation*}

where T is a transformation matrix consisting of a rotation matrix R and a scaling matrix S:

(5)   \begin{equation*} T = R \, S. \end{equation*}

These matrices are defined as:

(6)   \begin{equation*} R = \begin{bmatrix} \cos(\theta) & -\sin(\theta) \\[0.3em] \sin(\theta) & \cos(\theta) \end{bmatrix} \end{equation*}

where \theta is the rotation angle, and:

(7)   \begin{equation*} S = \begin{bmatrix} s_x & 0 \\[0.3em] 0 & s_y \end{bmatrix} \end{equation*}

where s_x and s_y are the scaling factors in the x direction and the y direction respectively.
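In code, the transformation T = R \, S of equations (5)–(7) can be sketched as follows (the angle and scale factors here are illustrative choices, not values from the article):

```python
import numpy as np

theta = np.deg2rad(30.0)   # rotation angle (illustrative)
s_x, s_y = 4.0, 1.0        # scaling factors (illustrative)

# Equation (6): 2D rotation matrix.
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Equation (7): diagonal scaling matrix.
S = np.diag([s_x, s_y])

# Equation (5): the combined transformation.
T = R @ S
```

Applied to a 2×N data matrix D, the transformed data is then simply `D2 = T @ D`, as in equation (4).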

In the following section, we will discuss the relation between the covariance matrix \Sigma, and the linear transformation matrix T = R\, S.

Covariance matrix as a linear transformation

Let’s start with unscaled (scale equals 1) and unrotated data. In statistics this is often referred to as ‘white data’ because its samples are drawn from a standard normal distribution and therefore correspond to white (uncorrelated) noise:

Whitened data

Figure 5. White data is data with a unit covariance matrix.

The covariance matrix of this ‘white’ data equals the identity matrix, such that the variances and standard deviations equal 1 and the covariance equals zero:

(8)   \begin{equation*} \Sigma = \begin{bmatrix} \sigma_x^2 & 0 \\[0.3em] 0 & \sigma_y^2 \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 \\[0.3em] 0 & 1 \\ \end{bmatrix} \end{equation*}

Now let’s scale the data in the x-direction with a factor 4:

(9)   \begin{equation*} D' = \begin{bmatrix} 4 & 0 \\[0.3em] 0 & 1 \\ \end{bmatrix} \, D \end{equation*}

The data D' now looks as follows:

Data with variance in the x-direction

Figure 6. Variance in the x-direction results in a horizontal scaling.

The covariance matrix \Sigma' of D' is now:

(10)   \begin{equation*} \Sigma' = \begin{bmatrix} \sigma_x^2 & 0 \\[0.3em] 0 & \sigma_y^2 \\ \end{bmatrix} = \begin{bmatrix} 16 & 0 \\[0.3em] 0 & 1 \\ \end{bmatrix} \end{equation*}

Thus, the covariance matrix \Sigma' of the resulting data D' is related to the linear transformation T that is applied to the original data as follows: D' = T \, D, where

(11)   \begin{equation*} T = \sqrt{\Sigma'} = \begin{bmatrix} 4 & 0 \\[0.3em] 0 & 1 \\ \end{bmatrix}. \end{equation*}
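This relation is easy to verify empirically. The sketch below (sample size chosen for illustration) draws white data, scales it with the matrix from equation (9), and checks that the estimated covariance matrix is close to equation (10):

```python
import numpy as np

rng = np.random.default_rng(1)

# White data: 100,000 samples from a standard 2D normal, so Sigma ≈ I.
D = rng.standard_normal((2, 100_000))

# Equation (9): scale the x-direction by a factor 4.
T = np.array([[4.0, 0.0],
              [0.0, 1.0]])
D2 = T @ D

# Equation (10): the variances become 16 and 1, the covariance stays 0.
Sigma2 = np.cov(D2)
assert np.allclose(Sigma2, [[16.0, 0.0], [0.0, 1.0]], atol=0.5)
```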

However, although equation (11) holds when the data is scaled in the x and y direction, the question arises whether it also holds when a rotation is applied. To investigate the relation between the linear transformation matrix T and the covariance matrix \Sigma' in the general case, we will therefore try to decompose the covariance matrix into the product of rotation and scaling matrices.

In an earlier article we saw that a linear transformation matrix T is completely defined by its eigenvectors and eigenvalues. Applied to the covariance matrix, this means that:

(12)   \begin{equation*}  \Sigma \vec{v} = \lambda \vec{v} \end{equation*}

where \vec{v} is an eigenvector of \Sigma, and \lambda is the corresponding eigenvalue.

Since the eigenvalue \lambda is a scalar, applying \Sigma to one of its eigenvectors can only scale that vector; it cannot change its direction. A first important conclusion is therefore that the eigenvectors of the covariance matrix point in the directions of the largest variance of the data, while the corresponding eigenvalues represent the magnitude of the variance in those directions. This observation forms the basis of Principal Component Analysis and is illustrated by figure 7.

Visualization of the covariance matrix

Figure 7. Visualization of the covariance matrix

The largest eigenvector of the covariance matrix, shown in green, points in the direction of the largest variance of the original data. The second eigenvector, shown in magenta, is always orthogonal to the first. The eigenvalues determine the lengths of the arrows and thus correspond to the magnitude of the spread in these directions. The covariance matrix represents the horizontal and vertical spread of the data by its (diagonal) variance components, and the rotation angle by its (off-diagonal) covariance components. If the data had not been rotated, the eigenvectors would be axis-aligned, the covariance would be zero, and the variances would directly correspond to the eigenvalues.

A second important conclusion that can be drawn from equation (12) is that the covariance matrix can be seen as a linear transformation that maps each of its eigenvectors onto a scaled version of itself, where the scale factor is the corresponding eigenvalue. In the following paragraphs, we will show why these two conclusions are true, and how we can relate arbitrary linear transformations of our original data to the covariance matrix of the resulting data.

Equation (12) holds for each eigenvector-eigenvalue pair of matrix \Sigma. In the 2D case, we obtain two eigenvectors and two eigenvalues. The system of two equations defined by equation (12) can be represented efficiently using matrix notation:

(13)   \begin{equation*}  \Sigma \, V = V \, L \end{equation*}

where V is the matrix whose columns are the eigenvectors of \Sigma and L is the diagonal matrix whose non-zero elements are the corresponding eigenvalues.

This means that we can represent the covariance matrix as a function of its eigenvectors and eigenvalues:

(14)   \begin{equation*}  \Sigma = V \, L \, V^{-1} \end{equation*}

Equation (14) is called the eigendecomposition of the covariance matrix. Since the covariance matrix is symmetric and positive semi-definite, it can also be obtained using a Singular Value Decomposition algorithm. Whereas the eigenvectors represent the directions of the largest variance of the data, the eigenvalues represent the magnitude of this variance in those directions. In other words, V represents a rotation matrix, while \sqrt{L} represents a scaling matrix. The covariance matrix can thus be decomposed further as:

(15)   \begin{equation*}  \Sigma = R \, S \, S \, R^{-1} \end{equation*}

where R=V is a rotation matrix and S=\sqrt{L} is a scaling matrix.
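The decomposition in equations (14) and (15) can be checked numerically. The sketch below (with an arbitrary symmetric example matrix, not one from the article) uses np.linalg.eigh, the appropriate routine for symmetric matrices, which returns orthonormal eigenvectors:

```python
import numpy as np

# An arbitrary symmetric, positive-definite example covariance matrix.
Sigma = np.array([[3.0, 1.0],
                  [1.0, 2.0]])

# eigh returns eigenvalues in ascending order and eigenvectors as columns of V.
eigvals, V = np.linalg.eigh(Sigma)
L = np.diag(eigvals)

# Equation (14): Sigma = V L V^{-1}.
assert np.allclose(Sigma, V @ L @ np.linalg.inv(V))

# Equation (15): with R = V and S = sqrt(L), Sigma = R S S R^{-1}.
R, S = V, np.sqrt(L)
assert np.allclose(Sigma, R @ S @ S @ np.linalg.inv(R))
```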

In equation (5) we defined a linear transformation T=R \, S. Since S is a diagonal scaling matrix, S = S^{\intercal}. Furthermore, since R is an orthogonal matrix, R^{-1} = R^{\intercal}. Therefore, T^{\intercal} = (R \, S)^{\intercal} = S^{\intercal} \, R^{\intercal} = S \, R^{-1}. The covariance matrix can thus be written as:

(16)   \begin{equation*}  \Sigma = R \, S \, S \, R^{-1} = T \, T^{\intercal}. \end{equation*}

In other words, if we apply the linear transformation defined by T=R \, S to the original white data D shown by figure 5, we obtain the rotated and scaled data D' with covariance matrix T \, T^{\intercal} = \Sigma' = R \, S \, S \, R^{-1}. This is illustrated by figure 8:

The covariance matrix represents a linear transformation of the original data

Figure 8. The covariance matrix represents a linear transformation of the original data.

The colored arrows in figure 8 represent the eigenvectors. The largest eigenvector, i.e. the eigenvector with the largest corresponding eigenvalue, always points in the direction of the largest variance of the data and thereby defines its orientation. Subsequent eigenvectors are always orthogonal to the largest eigenvector due to the orthogonality of rotation matrices.
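The whole chain can also be verified end to end: transform white data with T = R \, S and compare the estimated covariance of the result with T \, T^{\intercal}. A sketch (angle, scale factors, and sample size are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)

theta = np.deg2rad(30.0)               # illustrative rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = np.diag([4.0, 1.0])                # illustrative scaling factors
T = R @ S

# White data, then the linear transformation D' = T D of equation (4).
D = rng.standard_normal((2, 200_000))
D2 = T @ D

# Equation (16): the covariance of D' should approach T T^T = R S S R^{-1}.
assert np.allclose(np.cov(D2), T @ T.T, atol=0.5)
```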


In this article we showed that the covariance matrix of observed data is directly related to a linear transformation of white, uncorrelated data. This linear transformation is completely defined by the eigenvectors and eigenvalues of the covariance matrix. While the eigenvectors represent the rotation matrix, the eigenvalues correspond to the squares of the scaling factors in each dimension.



  1. Chris says:

    Great article thank you

  2. Alex says:

    The covariance matrix is symmetric. Hence we can find a basis of orthonormal eigenvectors and then $\Sigma=V L V^T$.
    From a computational point of view it is much simpler to find $V^T$ than $V^{-1}$.

    • Very true, Alex, and thanks for your comment! This is also written in the article: “Furthermore, since R is an orthogonal matrix, R^{-1} = R^T”. But you are right that I only mention this near the end of the article, mostly because it is easier to develop an intuitive understanding of the first part of the article by considering R^{-1} instead of R^T.

  3. Brian says:

    Great post! I had a couple questions:
    1) The data D doesn’t need to be Gaussian does it?
    2) Is [9] reversed (should D be on the left)?

    • Hi Brian:
      1) Indeed the data D does not need to be Gaussian for the theory to hold, I should probably have made that more clear in the article. However, talking about covariance matrices often does not have much meaning in highly non-Gaussian data.

      2) That depends on whether D is a row vector or a column vector, I suppose. In this case, if each column of D is a data entry, then R*D = (D^T*R^T)^T

  4. Konstantin says:

    Thank you for this great post! But let me please correct one fundamental mistake that you made. The square root of covariance matrix M is not equal to R * S. The square root of M equals R * S * R’, where R’ is transposed R. Proof: (R * S * R’) * (R * S * R’) = R * S * R’ * R * S * R’ = R * S * S * R’ = T * T’ = M. And, of course, T is not a symmetric matrix (in your post T = T’, which is wrong).

    • Thanks a lot for noticing! You are right indeed, I will get back about this soon (don’t really have time right now).

      Edit: I just fixed this mistake. Sorry for the long delay, I didn’t find the time before. Thanks a lot for your feedback!

  5. srinivas kumar says:

    Very useful article :) What I feel needs to be included is the interpretation of the action of the covariance matrix as a linear operator. For example, the eigenvectors of the covariance matrix form the principal components in PCA. So, basically, the covariance matrix takes an input data point (vector) and, if it resembles the data points from which the operator was obtained, keeps it invariant (up to scaling). Is there a better way to interpret the eigenvectors of the covariance matrix?

Comments are very welcome!