<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Computer vision for dummies &#187; linear transformation</title>
	<atom:link href="https://www.visiondummy.com/tag/linear-transformation/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.visiondummy.com</link>
	<description>A blog about intelligent algorithms, machine learning, computer vision, datamining and more.</description>
	<lastBuildDate>Tue, 04 May 2021 14:17:31 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.8.39</generator>
	<item>
		<title>A geometric interpretation of the covariance matrix</title>
		<link>https://www.visiondummy.com/2014/04/geometric-interpretation-covariance-matrix/</link>
		<comments>https://www.visiondummy.com/2014/04/geometric-interpretation-covariance-matrix/#comments</comments>
		<pubDate>Thu, 24 Apr 2014 11:09:38 +0000</pubDate>
		<dc:creator><![CDATA[Vincent Spruyt]]></dc:creator>
				<category><![CDATA[Linear algebra]]></category>
		<category><![CDATA[covariance matrix]]></category>
		<category><![CDATA[eigendecomposition]]></category>
		<category><![CDATA[Eigenvectors]]></category>
		<category><![CDATA[linear transformation]]></category>
		<category><![CDATA[PCA]]></category>

		<guid isPermaLink="false">http://www.visiondummy.com/?p=440</guid>
		<description><![CDATA[<p>In this article, we provide an intuitive, geometric interpretation of the covariance matrix, by exploring the relation between linear transformations and the resulting data covariance. Most textbooks explain the shape of data based on the concept of covariance matrices. Instead, we take a backwards approach and explain the concept of covariance matrices based on the [...]</p>
<p>The post <a rel="nofollow" href="https://www.visiondummy.com/2014/04/geometric-interpretation-covariance-matrix/">A geometric interpretation of the covariance matrix</a> appeared first on <a rel="nofollow" href="https://www.visiondummy.com">Computer vision for dummies</a>.</p>
]]></description>
				<content:encoded><![CDATA[<h2>Introduction</h2>
<p>In this article, we provide an intuitive, geometric interpretation of the covariance matrix, by exploring the relation between linear transformations and the resulting data covariance. Most textbooks explain the shape of data based on the concept of covariance matrices. Instead, we take a backwards approach and explain the concept of covariance matrices based on the shape of data.</p>
<p>In a previous article, we discussed the concept of <a title="Why divide the sample variance by N-1?" href="http://www.visiondummy.com/2014/03/divide-variance-n-1/" target="_blank">variance</a>, and provided a derivation and proof of the well-known formula to estimate the sample variance. Figure 1 was used in this article to show that the standard deviation, as the square root of the variance, provides a measure of how much the data is spread across the feature space.</p>
<div id="attachment_213" style="width: 524px" class="wp-caption aligncenter"><a href="http://www.visiondummy.com/wp-content/uploads/2014/03/gaussiandensity.png"><img class="size-full wp-image-213 " style="margin: 0px;" title="Normal distribution" alt="Normal distribution" src="http://www.visiondummy.com/wp-content/uploads/2014/03/gaussiandensity.png" width="514" height="396" /></a><p class="wp-caption-text"><b>Figure 1.</b> Gaussian density function. For normally distributed data, 68% of the samples fall within the interval defined by the mean plus and minus the standard deviation.</p></div>
<p>We showed that an unbiased estimate of the variance can be obtained by:</p>
<p class="ql-center-displayed-equation" style="line-height: 129px;"><span class="ql-right-eqno"> (1) </span><span class="ql-left-eqno"> &nbsp; </span><img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-8511602375b6c3ba0dcf673f5fcdd8f9_l3.png" height="129" width="267" class="ql-img-displayed-equation " alt="&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#97;&#108;&#105;&#103;&#110;&#42;&#125; &#92;&#115;&#105;&#103;&#109;&#97;&#95;&#120;&#94;&#50;&#32;&#38;&#61;&#32;&#92;&#102;&#114;&#97;&#99;&#123;&#49;&#125;&#123;&#78;&#45;&#49;&#125;&#32;&#92;&#115;&#117;&#109;&#95;&#123;&#105;&#61;&#49;&#125;&#94;&#78;&#32;&#40;&#120;&#95;&#105;&#32;&#45;&#32;&#92;&#109;&#117;&#41;&#94;&#50;&#92;&#92; &#38;&#61;&#32;&#92;&#109;&#97;&#116;&#104;&#98;&#98;&#123;&#69;&#125;&#091;&#32;&#40;&#120;&#32;&#45;&#32;&#92;&#109;&#97;&#116;&#104;&#98;&#98;&#123;&#69;&#125;&#40;&#120;&#41;&#41;&#32;&#40;&#120;&#32;&#45;&#32;&#92;&#109;&#97;&#116;&#104;&#98;&#98;&#123;&#69;&#125;&#40;&#120;&#41;&#41;&#093;&#92;&#92; &#38;&#61;&#32;&#92;&#115;&#105;&#103;&#109;&#97;&#40;&#120;&#44;&#120;&#41; &#92;&#101;&#110;&#100;&#123;&#97;&#108;&#105;&#103;&#110;&#42;&#125;" title="Rendered by QuickLaTeX.com"/></p>
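<p>As a small numeric sketch (the sample values below are illustrative, not from the article), the unbiased estimator above corresponds to dividing by <i>N-1</i>, which NumPy exposes through the <code>ddof=1</code> option:</p>

```python
import numpy as np

# Hypothetical 1D sample, chosen only for illustration.
x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

# Unbiased sample variance: divide the summed squared deviations by N-1.
n = x.size
var_manual = np.sum((x - x.mean()) ** 2) / (n - 1)

# NumPy applies the same N-1 correction when ddof=1 is passed.
var_numpy = np.var(x, ddof=1)

print(var_manual, var_numpy)
```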
<p>However, variance can only be used to explain the spread of the data in the directions parallel to the axes of the feature space. Consider the 2D feature space shown by figure 2:</p>
<div id="attachment_390" style="width: 391px" class="wp-caption aligncenter"><a href="http://www.visiondummy.com/wp-content/uploads/2014/04/transformeddata.png"><img class="size-full wp-image-390   " style="margin: 0px;" title="Data with a positive covariance" alt="Data with a positive covariance" src="http://www.visiondummy.com/wp-content/uploads/2014/04/transformeddata.png" width="381" height="369" /></a><p class="wp-caption-text"><b>Figure 2.</b> The diagonal spread of the data is captured by the covariance.</p></div>
<p>For this data, we could calculate the variance <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-306b80c2caf6e1ce873db826824bae77_l3.png" class="ql-img-inline-formula " alt="&#92;&#115;&#105;&#103;&#109;&#97;&#40;&#120;&#44;&#120;&#41;" title="Rendered by QuickLaTeX.com" height="23" width="61" style="vertical-align: -6px;"/> in the x-direction and the variance <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-d0a6f8d59fd3d651e6d12aacb3804cb5_l3.png" class="ql-img-inline-formula " alt="&#92;&#115;&#105;&#103;&#109;&#97;&#40;&#121;&#44;&#121;&#41;" title="Rendered by QuickLaTeX.com" height="23" width="59" style="vertical-align: -6px;"/> in the y-direction. However, the horizontal and vertical spread of the data do not explain the clear diagonal correlation. Figure 2 clearly shows that on average, if the x-value of a data point increases, then the y-value also increases, resulting in a positive correlation. This correlation can be captured by extending the notion of variance to what is called the &#8216;covariance&#8217; of the data:</p>
<p class="ql-center-displayed-equation" style="line-height: 23px;"><span class="ql-right-eqno"> (2) </span><span class="ql-left-eqno"> &nbsp; </span><img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-476cbf37a8d4f3765fe0b2b58e5c8706_l3.png" height="23" width="304" class="ql-img-displayed-equation " alt="&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125; &#92;&#115;&#105;&#103;&#109;&#97;&#40;&#120;&#44;&#121;&#41;&#32;&#61;&#32;&#92;&#109;&#97;&#116;&#104;&#98;&#98;&#123;&#69;&#125;&#091;&#32;&#40;&#120;&#32;&#45;&#32;&#92;&#109;&#97;&#116;&#104;&#98;&#98;&#123;&#69;&#125;&#40;&#120;&#41;&#41;&#32;&#40;&#121;&#32;&#45;&#32;&#92;&#109;&#97;&#116;&#104;&#98;&#98;&#123;&#69;&#125;&#40;&#121;&#41;&#41;&#093; &#92;&#101;&#110;&#100;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125;" title="Rendered by QuickLaTeX.com"/></p>
<p>For 2D data, we thus obtain <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-306b80c2caf6e1ce873db826824bae77_l3.png" class="ql-img-inline-formula " alt="&#92;&#115;&#105;&#103;&#109;&#97;&#40;&#120;&#44;&#120;&#41;" title="Rendered by QuickLaTeX.com" height="23" width="61" style="vertical-align: -6px;"/>, <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-d0a6f8d59fd3d651e6d12aacb3804cb5_l3.png" class="ql-img-inline-formula " alt="&#92;&#115;&#105;&#103;&#109;&#97;&#40;&#121;&#44;&#121;&#41;" title="Rendered by QuickLaTeX.com" height="23" width="59" style="vertical-align: -6px;"/>, <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-88d33eb20eafcc741815d0fffe208e01_l3.png" class="ql-img-inline-formula " alt="&#92;&#115;&#105;&#103;&#109;&#97;&#40;&#120;&#44;&#121;&#41;" title="Rendered by QuickLaTeX.com" height="23" width="60" style="vertical-align: -6px;"/> and <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-42efe14d58befabbf2f821c96ced0b4a_l3.png" class="ql-img-inline-formula " alt="&#92;&#115;&#105;&#103;&#109;&#97;&#40;&#121;&#44;&#120;&#41;" title="Rendered by QuickLaTeX.com" height="23" width="60" style="vertical-align: -6px;"/>. These four values can be summarized in a matrix, called the covariance matrix:</p>
<p class="ql-center-displayed-equation" style="line-height: 64px;"><span class="ql-right-eqno"> (3) </span><span class="ql-left-eqno"> &nbsp; </span><img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-c3b2c0560068487dd51917cd55636781_l3.png" height="64" width="205" class="ql-img-displayed-equation " alt="&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125; &#92;&#83;&#105;&#103;&#109;&#97;&#32;&#61;&#32;&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#98;&#109;&#97;&#116;&#114;&#105;&#120;&#125; &#92;&#115;&#105;&#103;&#109;&#97;&#40;&#120;&#44;&#120;&#41;&#32;&#38;&#32;&#92;&#115;&#105;&#103;&#109;&#97;&#40;&#120;&#44;&#121;&#41;&#32;&#92;&#92;&#091;&#48;&#46;&#51;&#101;&#109;&#093; &#92;&#115;&#105;&#103;&#109;&#97;&#40;&#121;&#44;&#120;&#41;&#32;&#38;&#32;&#92;&#115;&#105;&#103;&#109;&#97;&#40;&#121;&#44;&#121;&#41;&#32;&#92;&#92;&#091;&#48;&#46;&#51;&#101;&#109;&#093; &#92;&#101;&#110;&#100;&#123;&#98;&#109;&#97;&#116;&#114;&#105;&#120;&#125; &#92;&#101;&#110;&#100;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125;" title="Rendered by QuickLaTeX.com"/></p>
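<p>To make equation (3) concrete, here is a brief sketch (the data-generating numbers are assumed, not from the article) that builds correlated 2D data and computes its covariance matrix with NumPy:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2D data: y depends partly on x, so sigma(x, y) > 0.
x = rng.standard_normal(1000)
y = 0.8 * x + 0.3 * rng.standard_normal(1000)
D = np.vstack([x, y])  # rows are variables, columns are observations

# np.cov returns the 2x2 matrix of equation (3),
# using the same N-1 normalization as the sample variance.
Sigma = np.cov(D)

print(Sigma)
```

<p>Note that the two off-diagonal entries come out identical, matching the symmetry discussed below.</p>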
<p>If x is positively correlated with y, y is also positively correlated with x. In other words, we can state that <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-a9f6d2d1f35bd9860e5975cd6a893877_l3.png" class="ql-img-inline-formula " alt="&#92;&#115;&#105;&#103;&#109;&#97;&#40;&#120;&#44;&#121;&#41;&#32;&#61;&#32;&#92;&#115;&#105;&#103;&#109;&#97;&#40;&#121;&#44;&#120;&#41;" title="Rendered by QuickLaTeX.com" height="23" width="150" style="vertical-align: -6px;"/>. Therefore, the covariance matrix is always a symmetric matrix with the variances on its diagonal and the covariances off-diagonal. Two-dimensional normally distributed data is explained completely by its mean and its <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-9550d59c0c85b85636acad265530a8ee_l3.png" class="ql-img-inline-formula " alt="&#50;&#92;&#116;&#105;&#109;&#101;&#115;&#32;&#50;" title="Rendered by QuickLaTeX.com" height="15" width="45" style="vertical-align: 0px;"/> covariance matrix. Similarly, a <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-9f4e76f38736d8026154c7113a886bc0_l3.png" class="ql-img-inline-formula " alt="&#51;&#32;&#92;&#116;&#105;&#109;&#101;&#115;&#32;&#51;" title="Rendered by QuickLaTeX.com" height="15" width="46" style="vertical-align: 0px;"/> covariance matrix is used to capture the spread of three-dimensional data, and a <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-27211e8b64d0af6bb1c7c805a18af057_l3.png" class="ql-img-inline-formula " alt="&#78;&#32;&#92;&#116;&#105;&#109;&#101;&#115;&#32;&#78;" title="Rendered by QuickLaTeX.com" height="14" width="64" style="vertical-align: 0px;"/> covariance matrix captures the spread of N-dimensional data.</p>
<p>Figure 3 illustrates how the overall shape of the data defines the covariance matrix:</p>
<div id="attachment_446" style="width: 503px" class="wp-caption aligncenter"><a href="http://www.visiondummy.com/wp-content/uploads/2014/04/covariances.png"><img class="size-full wp-image-446" style="margin: 0px;" title="The spread of the data is defined by its covariance matrix" alt="The spread of the data is defined by its covariance matrix" src="http://www.visiondummy.com/wp-content/uploads/2014/04/covariances.png" width="493" height="479" /></a><p class="wp-caption-text"><b>Figure 3.</b> The covariance matrix defines the shape of the data. Diagonal spread is captured by the covariance, while axis-aligned spread is captured by the variance.</p></div>
<h2>Eigendecomposition of a covariance matrix</h2>
<p>In the next section, we will discuss how the covariance matrix can be interpreted as a linear operator that transforms white data into the data we observed. However, before diving into the technical details, it is important to gain an intuitive understanding of how eigenvectors and eigenvalues uniquely define the covariance matrix, and therefore the shape of our data.</p>
<p>As we saw in figure 3, the covariance matrix defines both the spread (variance), and the orientation (covariance) of our data. So, if we would like to represent the covariance matrix with a vector and its magnitude, we should simply try to find the vector that points into the direction of the largest spread of the data, and whose magnitude equals the spread (variance) in this direction.</p>
<p>If we define this vector as <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-5663d3adf90e26dd70e1f371e6cd6eba_l3.png" class="ql-img-inline-formula " alt="&#92;&#118;&#101;&#99;&#123;&#118;&#125;" title="Rendered by QuickLaTeX.com" height="15" width="13" style="vertical-align: 0px;"/>, then the projection of our data <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-6fe012cfdbc6f342dbd886ff568ed4ab_l3.png" class="ql-img-inline-formula " alt="&#68;" title="Rendered by QuickLaTeX.com" height="14" width="17" style="vertical-align: 0px;"/> onto this vector is obtained as <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-ccf1bdb39d78be778899729ac16806ba_l3.png" class="ql-img-inline-formula " alt="&#92;&#118;&#101;&#99;&#123;&#118;&#125;&#94;&#123;&#92;&#105;&#110;&#116;&#101;&#114;&#99;&#97;&#108;&#125;&#32;&#68;" title="Rendered by QuickLaTeX.com" height="15" width="37" style="vertical-align: 0px;"/>, and the variance of the projected data is <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-e76086e0b82464aff045e27892d04123_l3.png" class="ql-img-inline-formula " alt="&#92;&#118;&#101;&#99;&#123;&#118;&#125;&#94;&#123;&#92;&#105;&#110;&#116;&#101;&#114;&#99;&#97;&#108;&#125;&#32;&#92;&#83;&#105;&#103;&#109;&#97;&#32;&#92;&#118;&#101;&#99;&#123;&#118;&#125;" title="Rendered by QuickLaTeX.com" height="15" width="49" style="vertical-align: 0px;"/>. 
Since we are looking for the vector <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-5663d3adf90e26dd70e1f371e6cd6eba_l3.png" class="ql-img-inline-formula " alt="&#92;&#118;&#101;&#99;&#123;&#118;&#125;" title="Rendered by QuickLaTeX.com" height="15" width="13" style="vertical-align: 0px;"/> that points into the direction of the largest variance, we should choose its components such that the variance <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-e76086e0b82464aff045e27892d04123_l3.png" class="ql-img-inline-formula " alt="&#92;&#118;&#101;&#99;&#123;&#118;&#125;&#94;&#123;&#92;&#105;&#110;&#116;&#101;&#114;&#99;&#97;&#108;&#125;&#32;&#92;&#83;&#105;&#103;&#109;&#97;&#32;&#92;&#118;&#101;&#99;&#123;&#118;&#125;" title="Rendered by QuickLaTeX.com" height="15" width="49" style="vertical-align: 0px;"/> of the projected data is as large as possible. Maximizing any function of the form <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-e76086e0b82464aff045e27892d04123_l3.png" class="ql-img-inline-formula " alt="&#92;&#118;&#101;&#99;&#123;&#118;&#125;&#94;&#123;&#92;&#105;&#110;&#116;&#101;&#114;&#99;&#97;&#108;&#125;&#32;&#92;&#83;&#105;&#103;&#109;&#97;&#32;&#92;&#118;&#101;&#99;&#123;&#118;&#125;" title="Rendered by QuickLaTeX.com" height="15" width="49" style="vertical-align: 0px;"/> with respect to <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-5663d3adf90e26dd70e1f371e6cd6eba_l3.png" class="ql-img-inline-formula " alt="&#92;&#118;&#101;&#99;&#123;&#118;&#125;" title="Rendered by QuickLaTeX.com" height="15" width="13" style="vertical-align: 0px;"/>, where <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-5663d3adf90e26dd70e1f371e6cd6eba_l3.png" class="ql-img-inline-formula " alt="&#92;&#118;&#101;&#99;&#123;&#118;&#125;" title="Rendered by QuickLaTeX.com" height="15" width="13" style="vertical-align: 0px;"/> is a unit vector, 
can be formulated as a so-called <a href="http://en.wikipedia.org/wiki/Rayleigh_quotient" title="Rayleigh Quotient" target="_blank">Rayleigh Quotient</a>. The maximum of such a Rayleigh Quotient is obtained by setting <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-5663d3adf90e26dd70e1f371e6cd6eba_l3.png" class="ql-img-inline-formula " alt="&#92;&#118;&#101;&#99;&#123;&#118;&#125;" title="Rendered by QuickLaTeX.com" height="15" width="13" style="vertical-align: 0px;"/> equal to the largest eigenvector of matrix <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-66f091b3d894ca4b0418d9487b6b7e8a_l3.png" class="ql-img-inline-formula " alt="&#92;&#83;&#105;&#103;&#109;&#97;" title="Rendered by QuickLaTeX.com" height="15" width="13" style="vertical-align: 0px;"/>.</p>
<p>In other words, the largest eigenvector of the covariance matrix always points into the direction of the largest variance of the data, and the magnitude of this vector equals the corresponding eigenvalue. The second largest eigenvector is always orthogonal to the largest eigenvector, and points into the direction of the second largest spread of the data.</p>
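<p>A quick sketch of this property (the covariance matrix below is an assumed example, not taken from the article): the eigenvector with the largest eigenvalue maximizes the Rayleigh quotient, and its eigenvalue equals the variance along that direction.</p>

```python
import numpy as np

# Assumed example covariance matrix with positive covariance.
Sigma = np.array([[3.0, 1.0],
                  [1.0, 2.0]])

# eigh is the right routine for symmetric matrices;
# it returns eigenvalues in ascending order.
eigvals, eigvecs = np.linalg.eigh(Sigma)
v = eigvecs[:, -1]   # eigenvector of the largest eigenvalue
lam = eigvals[-1]    # largest eigenvalue = variance along v

# The Rayleigh quotient v^T Sigma v attains its maximum, lam, at v.
print(lam, v @ Sigma @ v)
```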
<p>Now let&#8217;s have a look at some examples. In an earlier article we saw that a linear transformation matrix <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-99bdf2edc1f86c3fa1d60f4d82513c7d_l3.png" class="ql-img-inline-formula " alt="&#84;" title="Rendered by QuickLaTeX.com" height="14" width="15" style="vertical-align: 0px;"/> is completely defined by its <a title="What are eigenvectors and eigenvalues?" href="http://www.visiondummy.com/2014/03/eigenvalues-eigenvectors/" target="_blank">eigenvectors and eigenvalues</a>. Applied to the covariance matrix, this means that:<br />
<a name="id3483335494"></a>
<p class="ql-center-displayed-equation" style="line-height: 15px;"><span class="ql-right-eqno"> (4) </span><span class="ql-left-eqno"> &nbsp; </span><img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-a17919125852783f2014314d7368316e_l3.png" height="15" width="79" class="ql-img-displayed-equation " alt="&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125;&#32; &#92;&#83;&#105;&#103;&#109;&#97;&#32;&#92;&#118;&#101;&#99;&#123;&#118;&#125;&#32;&#61;&#32;&#92;&#108;&#97;&#109;&#98;&#100;&#97;&#32;&#92;&#118;&#101;&#99;&#123;&#118;&#125; &#92;&#101;&#110;&#100;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125;" title="Rendered by QuickLaTeX.com"/></p>
<p>where <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-5663d3adf90e26dd70e1f371e6cd6eba_l3.png" class="ql-img-inline-formula " alt="&#92;&#118;&#101;&#99;&#123;&#118;&#125;" title="Rendered by QuickLaTeX.com" height="15" width="13" style="vertical-align: 0px;"/> is an eigenvector of <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-66f091b3d894ca4b0418d9487b6b7e8a_l3.png" class="ql-img-inline-formula " alt="&#92;&#83;&#105;&#103;&#109;&#97;" title="Rendered by QuickLaTeX.com" height="15" width="13" style="vertical-align: 0px;"/>, and <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-50bc2c4701f0a0dd472fdd7dad5c47d9_l3.png" class="ql-img-inline-formula " alt="&#92;&#108;&#97;&#109;&#98;&#100;&#97;" title="Rendered by QuickLaTeX.com" height="14" width="11" style="vertical-align: 0px;"/> is the corresponding eigenvalue.</p>
<p>If the covariance matrix of our data is a diagonal matrix, such that the covariances are zero, then this means that the variances must be equal to the eigenvalues <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-50bc2c4701f0a0dd472fdd7dad5c47d9_l3.png" class="ql-img-inline-formula " alt="&#92;&#108;&#97;&#109;&#98;&#100;&#97;" title="Rendered by QuickLaTeX.com" height="14" width="11" style="vertical-align: 0px;"/>. This is illustrated by figure 4, where the eigenvectors are shown in green and magenta, and where the eigenvalues clearly equal the variance components of the covariance matrix.</p>
<div id="attachment_603" style="width: 810px" class="wp-caption aligncenter"><a href="http://www.visiondummy.com/wp-content/uploads/2014/04/eigenvectors.png"><img src="http://www.visiondummy.com/wp-content/uploads/2014/04/eigenvectors.png" alt="Eigenvectors of a covariance matrix" width="800" height="383" class="size-full wp-image-603" /></a><p class="wp-caption-text"><b>Figure 4.</b> Eigenvectors of a covariance matrix</p></div>
<p>However, if the covariance matrix is not diagonal, such that the covariances are not zero, then the situation is a little more complicated. The eigenvalues still represent the variance magnitude in the direction of the largest spread of the data, and the variance components of the covariance matrix still represent the variance magnitude in the direction of the x-axis and y-axis. But since the data is not axis aligned, these values are no longer the same, as shown by figure 5.</p>
<div id="attachment_604" style="width: 810px" class="wp-caption aligncenter"><a href="http://www.visiondummy.com/wp-content/uploads/2014/04/eigenvectors_covariance.png"><img src="http://www.visiondummy.com/wp-content/uploads/2014/04/eigenvectors_covariance.png" alt="Eigenvectors with covariance" width="800" height="382" class="size-full wp-image-604" /></a><p class="wp-caption-text"><b>Figure 5.</b> Eigenvalues versus variance</p></div>
<p>By comparing figure 5 with figure 4, it becomes clear that the eigenvalues represent the variance of the data along the eigenvector directions, whereas the variance components of the covariance matrix represent the spread along the axes. If there are no covariances, then both values are equal.</p>
<h2>Covariance matrix as a linear transformation</h2>
<p>Now let&#8217;s forget about covariance matrices for a moment. Each of the examples in figure 3 can simply be considered to be a linearly transformed instance of figure 6:</p>
<div id="attachment_447" style="width: 391px" class="wp-caption aligncenter"><a href="http://www.visiondummy.com/wp-content/uploads/2014/04/whiteneddata.png"><img class="size-full wp-image-447" style="margin: 0px;" title="White data" alt="White data" src="http://www.visiondummy.com/wp-content/uploads/2014/04/whiteneddata.png" width="381" height="369" /></a><p class="wp-caption-text"><b>Figure 6.</b> Data with unit covariance matrix is called white data.</p></div>
<p>Let the data shown by figure 6 be <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-6fe012cfdbc6f342dbd886ff568ed4ab_l3.png" class="ql-img-inline-formula " alt="&#68;" title="Rendered by QuickLaTeX.com" height="14" width="17" style="vertical-align: 0px;"/>, then each of the examples shown by figure 3 can be obtained by linearly transforming <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-6fe012cfdbc6f342dbd886ff568ed4ab_l3.png" class="ql-img-inline-formula " alt="&#68;" title="Rendered by QuickLaTeX.com" height="14" width="17" style="vertical-align: 0px;"/>:</p>
<p class="ql-center-displayed-equation" style="line-height: 18px;"><span class="ql-right-eqno"> (5) </span><span class="ql-left-eqno"> &nbsp; </span><img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-7aecb171a514b3c704f078ec86182805_l3.png" height="18" width="87" class="ql-img-displayed-equation " alt="&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125; &#68;&#39;&#32;&#61;&#32;&#84;&#32;&#92;&#44;&#32;&#68; &#92;&#101;&#110;&#100;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125;" title="Rendered by QuickLaTeX.com"/></p>
<p>where <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-99bdf2edc1f86c3fa1d60f4d82513c7d_l3.png" class="ql-img-inline-formula " alt="&#84;" title="Rendered by QuickLaTeX.com" height="14" width="15" style="vertical-align: 0px;"/> is a transformation matrix consisting of a rotation matrix <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-026035461a80f8e10b18e494d1116782_l3.png" class="ql-img-inline-formula " alt="&#82;" title="Rendered by QuickLaTeX.com" height="14" width="16" style="vertical-align: 0px;"/> and a scaling matrix <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-7f83dd23b1b356198dd90895630ebcef_l3.png" class="ql-img-inline-formula " alt="&#83;" title="Rendered by QuickLaTeX.com" height="14" width="13" style="vertical-align: 0px;"/>:<br />
<a name="id1585768567"></a>
<p class="ql-center-displayed-equation" style="line-height: 14px;"><span class="ql-right-eqno"> (6) </span><span class="ql-left-eqno"> &nbsp; </span><img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-2481ecd212935a8cc503131bf2596bf6_l3.png" height="14" width="81" class="ql-img-displayed-equation " alt="&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125; &#84;&#32;&#61;&#32;&#82;&#32;&#92;&#44;&#32;&#83;&#46; &#92;&#101;&#110;&#100;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125;" title="Rendered by QuickLaTeX.com"/></p>
<p>These matrices are defined as:</p>
<p class="ql-center-displayed-equation" style="line-height: 64px;"><span class="ql-right-eqno"> (7) </span><span class="ql-left-eqno"> &nbsp; </span><img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-78bf053271a867c2d5b7c2b30d3e7924_l3.png" height="64" width="211" class="ql-img-displayed-equation " alt="&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125; &#82;&#32;&#61;&#32;&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#98;&#109;&#97;&#116;&#114;&#105;&#120;&#125; &#92;&#99;&#111;&#115;&#40;&#92;&#116;&#104;&#101;&#116;&#97;&#41;&#32;&#38;&#32;&#45;&#92;&#115;&#105;&#110;&#40;&#92;&#116;&#104;&#101;&#116;&#97;&#41;&#32;&#92;&#92;&#091;&#48;&#46;&#51;&#101;&#109;&#093; &#92;&#115;&#105;&#110;&#40;&#92;&#116;&#104;&#101;&#116;&#97;&#41;&#32;&#38;&#32;&#92;&#99;&#111;&#115;&#40;&#92;&#116;&#104;&#101;&#116;&#97;&#41; &#92;&#101;&#110;&#100;&#123;&#98;&#109;&#97;&#116;&#114;&#105;&#120;&#125; &#92;&#101;&#110;&#100;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125;" title="Rendered by QuickLaTeX.com"/></p>
<p>where <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-a633c6dcc2aba17ef85b129e4fbcaf98_l3.png" class="ql-img-inline-formula " alt="&#92;&#116;&#104;&#101;&#116;&#97;" title="Rendered by QuickLaTeX.com" height="14" width="10" style="vertical-align: 0px;"/> is the rotation angle, and:</p>
<p class="ql-center-displayed-equation" style="line-height: 64px;"><span class="ql-right-eqno"> (8) </span><span class="ql-left-eqno"> &nbsp; </span><img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-0756bebe1440213107fea1005e1a655b_l3.png" height="64" width="120" class="ql-img-displayed-equation " alt="&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125; &#83;&#32;&#61;&#32;&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#98;&#109;&#97;&#116;&#114;&#105;&#120;&#125; &#115;&#95;&#120;&#32;&#38;&#32;&#48;&#32;&#92;&#92;&#091;&#48;&#46;&#51;&#101;&#109;&#093; &#48;&#32;&#38;&#32;&#115;&#95;&#121; &#92;&#101;&#110;&#100;&#123;&#98;&#109;&#97;&#116;&#114;&#105;&#120;&#125; &#92;&#101;&#110;&#100;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125;" title="Rendered by QuickLaTeX.com"/></p>
<p>where <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-197e94159cb0b049505c16b6448e224c_l3.png" class="ql-img-inline-formula " alt="&#115;&#95;&#120;" title="Rendered by QuickLaTeX.com" height="12" width="18" style="vertical-align: -3px;"/> and <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-1be2a4d3326735aa17afcfc4d6409278_l3.png" class="ql-img-inline-formula " alt="&#115;&#95;&#121;" title="Rendered by QuickLaTeX.com" height="15" width="18" style="vertical-align: -6px;"/> are the scaling factors in the x-direction and the y-direction, respectively.</p>
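<p>Equations (6)&#8211;(8) can be sketched in a few lines of NumPy. The angle and scale factors below are assumed example values, chosen only to show how <i>T = R S</i> is assembled:</p>

```python
import numpy as np

# Assumed example parameters: rotate by 30 degrees, scale x by 4.
theta = np.pi / 6
s_x, s_y = 4.0, 1.0

# Rotation matrix R of equation (7).
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Scaling matrix S of equation (8).
S = np.diag([s_x, s_y])

# Equation (6): the data is scaled first, then rotated.
T = R @ S

print(T)
```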
<p>In the following paragraphs, we will discuss the relation between the covariance matrix <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-66f091b3d894ca4b0418d9487b6b7e8a_l3.png" class="ql-img-inline-formula " alt="&#92;&#83;&#105;&#103;&#109;&#97;" title="Rendered by QuickLaTeX.com" height="15" width="13" style="vertical-align: 0px;"/>, and the linear transformation matrix <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-b40eb258e9e321e3d2262a5afffcc8bb_l3.png" class="ql-img-inline-formula " alt="&#84;&#32;&#61;&#32;&#82;&#92;&#44;&#32;&#83;" title="Rendered by QuickLaTeX.com" height="14" width="77" style="vertical-align: 0px;"/>.</p>
<p>Let&#8217;s start with unscaled (scale equals 1) and unrotated data. In statistics this is often referred to as &#8216;white data&#8217; because its samples are drawn from a standard normal distribution and therefore correspond to white (uncorrelated) noise:</p>
<div id="attachment_394" style="width: 391px" class="wp-caption aligncenter"><a href="http://www.visiondummy.com/wp-content/uploads/2014/04/whiteneddata.png"><img class="size-full wp-image-394 " style="margin: 0px;" title="Whitened data" alt="Whitened data" src="http://www.visiondummy.com/wp-content/uploads/2014/04/whiteneddata.png" width="381" height="369" /></a><p class="wp-caption-text"><b>Figure 7.</b> White data is data with a unit covariance matrix.</p></div>
<p>The covariance matrix of this &#8216;white&#8217; data equals the identity matrix, such that the variances and standard deviations equal 1 and the covariance equals zero:</p>
<p class="ql-center-displayed-equation" style="line-height: 64px;"><span class="ql-right-eqno"> (9) </span><span class="ql-left-eqno"> &nbsp; </span><img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-22cfcbfd49a80711b48bee89d0ac5e9e_l3.png" height="64" width="218" class="ql-img-displayed-equation " alt="&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125; &#92;&#83;&#105;&#103;&#109;&#97;&#32;&#61;&#32;&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#98;&#109;&#97;&#116;&#114;&#105;&#120;&#125; &#92;&#115;&#105;&#103;&#109;&#97;&#95;&#120;&#94;&#50;&#32;&#38;&#32;&#48;&#32;&#92;&#92;&#091;&#48;&#46;&#51;&#101;&#109;&#093; &#48;&#32;&#38;&#32;&#92;&#115;&#105;&#103;&#109;&#97;&#95;&#121;&#94;&#50;&#32;&#92;&#92; &#92;&#101;&#110;&#100;&#123;&#98;&#109;&#97;&#116;&#114;&#105;&#120;&#125;&#32;&#61;&#32;&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#98;&#109;&#97;&#116;&#114;&#105;&#120;&#125; &#49;&#32;&#38;&#32;&#48;&#32;&#92;&#92;&#091;&#48;&#46;&#51;&#101;&#109;&#093; &#48;&#32;&#38;&#32;&#49;&#32;&#92;&#92; &#92;&#101;&#110;&#100;&#123;&#98;&#109;&#97;&#116;&#114;&#105;&#120;&#125; &#92;&#101;&#110;&#100;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125;" title="Rendered by QuickLaTeX.com"/></p>
<p>Now let&#8217;s scale the data in the x-direction by a factor of 4:</p>
<p class="ql-center-displayed-equation" style="line-height: 64px;"><span class="ql-right-eqno"> (10) </span><span class="ql-left-eqno"> &nbsp; </span><img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-93925ded582a8e859f4efd17c75d7dc9_l3.png" height="64" width="141" class="ql-img-displayed-equation " alt="&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125; &#68;&#39;&#32;&#61;&#32;&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#98;&#109;&#97;&#116;&#114;&#105;&#120;&#125; &#52;&#32;&#38;&#32;&#48;&#32;&#92;&#92;&#091;&#48;&#46;&#51;&#101;&#109;&#093; &#48;&#32;&#38;&#32;&#49;&#32;&#92;&#92; &#92;&#101;&#110;&#100;&#123;&#98;&#109;&#97;&#116;&#114;&#105;&#120;&#125;&#32;&#92;&#44;&#32;&#68; &#92;&#101;&#110;&#100;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125;" title="Rendered by QuickLaTeX.com"/></p>
<p>The data <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-69e87a5558d2fcd98b5a9d1292a4345e_l3.png" class="ql-img-inline-formula " alt="&#68;&#39;" title="Rendered by QuickLaTeX.com" height="17" width="23" style="vertical-align: 0px;"/> now looks as follows:</p>
<div id="attachment_400" style="width: 391px" class="wp-caption aligncenter"><a href="http://www.visiondummy.com/wp-content/uploads/2014/04/stretcheddata.png"><img class="size-full wp-image-400" style="margin: 0px;" title="Data with variance in the x-direction" alt="Data with variance in the x-direction" src="http://www.visiondummy.com/wp-content/uploads/2014/04/stretcheddata.png" width="381" height="369" /></a><p class="wp-caption-text"><b>Figure 8.</b> Variance in the x-direction results in a horizontal scaling.</p></div>
<p>The covariance matrix <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-6a91d339ba236a991b48b26135dd4246_l3.png" class="ql-img-inline-formula " alt="&#92;&#83;&#105;&#103;&#109;&#97;&#39;" title="Rendered by QuickLaTeX.com" height="17" width="19" style="vertical-align: 0px;"/> of <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-69e87a5558d2fcd98b5a9d1292a4345e_l3.png" class="ql-img-inline-formula " alt="&#68;&#39;" title="Rendered by QuickLaTeX.com" height="17" width="23" style="vertical-align: 0px;"/> is now:</p>
<p class="ql-center-displayed-equation" style="line-height: 64px;"><span class="ql-right-eqno"> (11) </span><span class="ql-left-eqno"> &nbsp; </span><img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-b17970f14e6400c5fc20c4b9c069abfd_l3.png" height="64" width="234" class="ql-img-displayed-equation " alt="&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125; &#92;&#83;&#105;&#103;&#109;&#97;&#39;&#32;&#61;&#32;&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#98;&#109;&#97;&#116;&#114;&#105;&#120;&#125; &#92;&#115;&#105;&#103;&#109;&#97;&#95;&#120;&#94;&#50;&#32;&#38;&#32;&#48;&#32;&#92;&#92;&#091;&#48;&#46;&#51;&#101;&#109;&#093; &#48;&#32;&#38;&#32;&#92;&#115;&#105;&#103;&#109;&#97;&#95;&#121;&#94;&#50;&#32;&#92;&#92; &#92;&#101;&#110;&#100;&#123;&#98;&#109;&#97;&#116;&#114;&#105;&#120;&#125;&#32;&#61;&#32;&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#98;&#109;&#97;&#116;&#114;&#105;&#120;&#125; &#49;&#54;&#32;&#38;&#32;&#48;&#32;&#92;&#92;&#091;&#48;&#46;&#51;&#101;&#109;&#093; &#48;&#32;&#38;&#32;&#49;&#32;&#92;&#92; &#92;&#101;&#110;&#100;&#123;&#98;&#109;&#97;&#116;&#114;&#105;&#120;&#125; &#92;&#101;&#110;&#100;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125;" title="Rendered by QuickLaTeX.com"/></p>
<p>Thus, the covariance matrix <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-6a91d339ba236a991b48b26135dd4246_l3.png" class="ql-img-inline-formula " alt="&#92;&#83;&#105;&#103;&#109;&#97;&#39;" title="Rendered by QuickLaTeX.com" height="17" width="19" style="vertical-align: 0px;"/> of the resulting data <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-69e87a5558d2fcd98b5a9d1292a4345e_l3.png" class="ql-img-inline-formula " alt="&#68;&#39;" title="Rendered by QuickLaTeX.com" height="17" width="23" style="vertical-align: 0px;"/> is related to the linear transformation <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-99bdf2edc1f86c3fa1d60f4d82513c7d_l3.png" class="ql-img-inline-formula " alt="&#84;" title="Rendered by QuickLaTeX.com" height="14" width="15" style="vertical-align: 0px;"/> that is applied to the original data as follows: <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-9bd559b313e798679ab85e7718dea765_l3.png" class="ql-img-inline-formula " alt="&#68;&#39;&#32;&#61;&#32;&#84;&#32;&#92;&#44;&#32;&#68;" title="Rendered by QuickLaTeX.com" height="17" width="87" style="vertical-align: 0px;"/>, where<br />
<a name="id537686066"></a>
<p class="ql-center-displayed-equation" style="line-height: 64px;"><span class="ql-right-eqno"> (12) </span><span class="ql-left-eqno"> &nbsp; </span><img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-51df1544156ec5782e7799b4782b029b_l3.png" height="64" width="183" class="ql-img-displayed-equation " alt="&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125; &#84;&#32;&#61;&#32;&#92;&#115;&#113;&#114;&#116;&#123;&#92;&#83;&#105;&#103;&#109;&#97;&#39;&#125;&#32;&#61;&#32;&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#98;&#109;&#97;&#116;&#114;&#105;&#120;&#125; &#52;&#32;&#38;&#32;&#48;&#32;&#92;&#92;&#091;&#48;&#46;&#51;&#101;&#109;&#093; &#48;&#32;&#38;&#32;&#49;&#32;&#92;&#92; &#92;&#101;&#110;&#100;&#123;&#98;&#109;&#97;&#116;&#114;&#105;&#120;&#125;&#46; &#92;&#101;&#110;&#100;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125;" title="Rendered by QuickLaTeX.com"/></p>
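<p>A minimal NumPy sketch of this step (an illustration added here, not from the original post): applying the scaling matrix to white data yields a covariance matrix whose diagonal holds the squared scaling factors, so the elementwise square root recovers the transformation:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.standard_normal((2, 100_000))   # white data, covariance ~ identity

T = np.array([[4.0, 0.0],
              [0.0, 1.0]])              # scale the x-direction by a factor of 4
D_prime = T @ D

# The covariance of the scaled data is approximately diag(16, 1).
Sigma_prime = np.cov(D_prime)
print(np.round(Sigma_prime, 1))

# For this diagonal case, T is recovered as the square root of Sigma'.
print(np.round(np.sqrt(np.diag(Sigma_prime)), 1))
```
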
<p>However, although equation (<a href="#id537686066">12</a>) holds when the data is scaled in the x and y directions, the question arises whether it also holds when a rotation is applied. To investigate the relation between the linear transformation matrix <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-99bdf2edc1f86c3fa1d60f4d82513c7d_l3.png" class="ql-img-inline-formula " alt="&#84;" title="Rendered by QuickLaTeX.com" height="14" width="15" style="vertical-align: 0px;"/> and the covariance matrix <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-6a91d339ba236a991b48b26135dd4246_l3.png" class="ql-img-inline-formula " alt="&#92;&#83;&#105;&#103;&#109;&#97;&#39;" title="Rendered by QuickLaTeX.com" height="17" width="19" style="vertical-align: 0px;"/> in the general case, we will try to decompose the covariance matrix into the product of rotation and scaling matrices.</p>
<p>As we saw earlier, we can represent the covariance matrix by its eigenvectors and eigenvalues:<br />
<a name="id3483335494"></a>
<p class="ql-center-displayed-equation" style="line-height: 15px;"><span class="ql-right-eqno"> (13) </span><span class="ql-left-eqno"> &nbsp; </span><img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-a17919125852783f2014314d7368316e_l3.png" height="15" width="79" class="ql-img-displayed-equation " alt="&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125;&#32; &#92;&#83;&#105;&#103;&#109;&#97;&#32;&#92;&#118;&#101;&#99;&#123;&#118;&#125;&#32;&#61;&#32;&#92;&#108;&#97;&#109;&#98;&#100;&#97;&#32;&#92;&#118;&#101;&#99;&#123;&#118;&#125; &#92;&#101;&#110;&#100;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125;" title="Rendered by QuickLaTeX.com"/></p>
<p>where <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-5663d3adf90e26dd70e1f371e6cd6eba_l3.png" class="ql-img-inline-formula " alt="&#92;&#118;&#101;&#99;&#123;&#118;&#125;" title="Rendered by QuickLaTeX.com" height="15" width="13" style="vertical-align: 0px;"/> is an eigenvector of <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-66f091b3d894ca4b0418d9487b6b7e8a_l3.png" class="ql-img-inline-formula " alt="&#92;&#83;&#105;&#103;&#109;&#97;" title="Rendered by QuickLaTeX.com" height="15" width="13" style="vertical-align: 0px;"/>, and <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-50bc2c4701f0a0dd472fdd7dad5c47d9_l3.png" class="ql-img-inline-formula " alt="&#92;&#108;&#97;&#109;&#98;&#100;&#97;" title="Rendered by QuickLaTeX.com" height="14" width="11" style="vertical-align: 0px;"/> is the corresponding eigenvalue.</p>
<p>Equation (<a href="#id3483335494">13</a>) holds for each eigenvector-eigenvalue pair of matrix <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-66f091b3d894ca4b0418d9487b6b7e8a_l3.png" class="ql-img-inline-formula " alt="&#92;&#83;&#105;&#103;&#109;&#97;" title="Rendered by QuickLaTeX.com" height="15" width="13" style="vertical-align: 0px;"/>. In the 2D case, we obtain two eigenvectors and two eigenvalues. The system of two equations defined by equation (<a href="#id3483335494">13</a>) can be represented efficiently using matrix notation:<br />
<a name="id1495159919"></a>
<p class="ql-center-displayed-equation" style="line-height: 15px;"><span class="ql-right-eqno"> (14) </span><span class="ql-left-eqno"> &nbsp; </span><img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-f6fdbf4f1af6863c9afc04f7418fdc6f_l3.png" height="15" width="97" class="ql-img-displayed-equation " alt="&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125;&#32; &#92;&#83;&#105;&#103;&#109;&#97;&#32;&#92;&#44;&#32;&#86;&#32;&#61;&#32;&#86;&#32;&#92;&#44;&#32;&#76; &#92;&#101;&#110;&#100;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125;" title="Rendered by QuickLaTeX.com"/></p>
<p>where <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-1f1ceff6690e6ea05bc7802220277816_l3.png" class="ql-img-inline-formula " alt="&#86;" title="Rendered by QuickLaTeX.com" height="14" width="16" style="vertical-align: 0px;"/> is the matrix whose columns are the eigenvectors of <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-66f091b3d894ca4b0418d9487b6b7e8a_l3.png" class="ql-img-inline-formula " alt="&#92;&#83;&#105;&#103;&#109;&#97;" title="Rendered by QuickLaTeX.com" height="15" width="13" style="vertical-align: 0px;"/> and <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-f8016ff830b491e0b1f3122a41ccff3f_l3.png" class="ql-img-inline-formula " alt="&#76;" title="Rendered by QuickLaTeX.com" height="14" width="14" style="vertical-align: 0px;"/> is the diagonal matrix whose non-zero elements are the corresponding eigenvalues.</p>
<p>This means that we can represent the covariance matrix as a function of its eigenvectors and eigenvalues:<br />
<a name="id2430180844"></a>
<p class="ql-center-displayed-equation" style="line-height: 20px;"><span class="ql-right-eqno"> (15) </span><span class="ql-left-eqno"> &nbsp; </span><img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-1bd7a6edabf351786ae510e2c02d1663_l3.png" height="20" width="117" class="ql-img-displayed-equation " alt="&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125;&#32; &#92;&#83;&#105;&#103;&#109;&#97;&#32;&#61;&#32;&#86;&#32;&#92;&#44;&#32;&#76;&#32;&#92;&#44;&#32;&#86;&#94;&#123;&#45;&#49;&#125; &#92;&#101;&#110;&#100;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125;" title="Rendered by QuickLaTeX.com"/></p>
<p>Equation (<a href="#id2430180844">15</a>) is called the eigendecomposition of the covariance matrix and, because the covariance matrix is symmetric and positive semi-definite, can also be obtained using a <a title="Singular Value Decomposition" href="https://en.wikipedia.org/wiki/Singular_value_decomposition" target="_blank">Singular Value Decomposition</a> algorithm. Whereas the eigenvectors represent the directions of the largest variance of the data, the eigenvalues represent the magnitude of this variance in those directions. In other words, <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-1f1ceff6690e6ea05bc7802220277816_l3.png" class="ql-img-inline-formula " alt="&#86;" title="Rendered by QuickLaTeX.com" height="14" width="16" style="vertical-align: 0px;"/> represents a rotation matrix, while <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-8235da3a0beb2b3fd48aef3af7ba37fa_l3.png" class="ql-img-inline-formula " alt="&#92;&#115;&#113;&#114;&#116;&#123;&#76;&#125;" title="Rendered by QuickLaTeX.com" height="22" width="32" style="vertical-align: -3px;"/> represents a scaling matrix. The covariance matrix can thus be decomposed further as:<br />
<a name="id2743526996"></a>
<p class="ql-center-displayed-equation" style="line-height: 20px;"><span class="ql-right-eqno"> (16) </span><span class="ql-left-eqno"> &nbsp; </span><img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-23df767e6fb3e95725feacf9467b019e_l3.png" height="20" width="133" class="ql-img-displayed-equation " alt="&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125;&#32; &#92;&#83;&#105;&#103;&#109;&#97;&#32;&#61;&#32;&#82;&#32;&#92;&#44;&#32;&#83;&#32;&#92;&#44;&#32;&#83;&#32;&#92;&#44;&#32;&#82;&#94;&#123;&#45;&#49;&#125; &#92;&#101;&#110;&#100;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125;" title="Rendered by QuickLaTeX.com"/></p>
<p>where <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-7c0435dd691dcbce6e1b3121ba27bbd6_l3.png" class="ql-img-inline-formula " alt="&#82;&#61;&#86;" title="Rendered by QuickLaTeX.com" height="14" width="61" style="vertical-align: 0px;"/> is a rotation matrix and <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-3262ef230355c0fef4406d0637062a28_l3.png" class="ql-img-inline-formula " alt="&#83;&#61;&#92;&#115;&#113;&#114;&#116;&#123;&#76;&#125;" title="Rendered by QuickLaTeX.com" height="22" width="74" style="vertical-align: -3px;"/> is a scaling matrix.</p>
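<p>The decomposition above can be checked numerically (a sketch with NumPy, not from the original post; the example covariance matrix is an arbitrary assumption):</p>

```python
import numpy as np

# An example symmetric, positive-definite covariance matrix (chosen arbitrarily).
Sigma = np.array([[3.0, 1.0],
                  [1.0, 2.0]])

# Eigendecomposition: Sigma = V L V^{-1}. For a symmetric matrix,
# eigh returns orthonormal eigenvectors as the columns of V.
eigvals, V = np.linalg.eigh(Sigma)

R = V                           # rotation matrix (possibly including a reflection)
S = np.diag(np.sqrt(eigvals))   # scaling matrix, S = sqrt(L)

# Verify the decomposition Sigma = R S S R^{-1}.
Sigma_reconstructed = R @ S @ S @ np.linalg.inv(R)
print(np.allclose(Sigma, Sigma_reconstructed))
```

Note that `eigh` may return eigenvectors forming a reflection rather than a pure rotation; for the reconstruction of the covariance matrix this makes no difference.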
<p>In equation (<a href="#id1585768567">6</a>) we defined a linear transformation <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-35bf2c5b24b044c78d4ac3ecff5b2078_l3.png" class="ql-img-inline-formula " alt="&#84;&#61;&#82;&#32;&#92;&#44;&#32;&#83;" title="Rendered by QuickLaTeX.com" height="14" width="77" style="vertical-align: 0px;"/>. Since <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-7f83dd23b1b356198dd90895630ebcef_l3.png" class="ql-img-inline-formula " alt="&#83;" title="Rendered by QuickLaTeX.com" height="14" width="13" style="vertical-align: 0px;"/> is a diagonal scaling matrix, <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-e6305c54d65d420123b47b07e403a536_l3.png" class="ql-img-inline-formula " alt="&#83;&#32;&#61;&#32;&#83;&#94;&#123;&#92;&#105;&#110;&#116;&#101;&#114;&#99;&#97;&#108;&#125;" title="Rendered by QuickLaTeX.com" height="15" width="63" style="vertical-align: 0px;"/>. Furthermore, since <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-026035461a80f8e10b18e494d1116782_l3.png" class="ql-img-inline-formula " alt="&#82;" title="Rendered by QuickLaTeX.com" height="14" width="16" style="vertical-align: 0px;"/> is an orthogonal matrix, <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-945fcf37014dbf658e49128ad721040e_l3.png" class="ql-img-inline-formula " alt="&#82;&#94;&#123;&#45;&#49;&#125;&#32;&#61;&#32;&#82;&#94;&#123;&#92;&#105;&#110;&#116;&#101;&#114;&#99;&#97;&#108;&#125;" title="Rendered by QuickLaTeX.com" height="19" width="90" style="vertical-align: 0px;"/>. 
Therefore, <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-205b466d7f5da5d6cff27d6859693391_l3.png" class="ql-img-inline-formula " alt="&#84;&#94;&#123;&#92;&#105;&#110;&#116;&#101;&#114;&#99;&#97;&#108;&#125;&#32;&#61;&#32;&#40;&#82;&#32;&#92;&#44;&#32;&#83;&#41;&#94;&#123;&#92;&#105;&#110;&#116;&#101;&#114;&#99;&#97;&#108;&#125;&#32;&#61;&#32;&#83;&#94;&#123;&#92;&#105;&#110;&#116;&#101;&#114;&#99;&#97;&#108;&#125;&#32;&#92;&#44;&#32;&#82;&#94;&#123;&#92;&#105;&#110;&#116;&#101;&#114;&#99;&#97;&#108;&#125;&#32;&#61;&#32;&#83;&#32;&#92;&#44;&#32;&#82;&#94;&#123;&#45;&#49;&#125;" title="Rendered by QuickLaTeX.com" height="25" width="275" style="vertical-align: -6px;"/>. The covariance matrix can thus be written as:<br />
<a name="id3282722977"></a>
<p class="ql-center-displayed-equation" style="line-height: 24px;"><span class="ql-right-eqno"> (17) </span><span class="ql-left-eqno"> &nbsp; </span><img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-5889c4c6b55d1c90107dd9fc09195d1c_l3.png" height="24" width="212" class="ql-img-displayed-equation " alt="&#92;&#98;&#101;&#103;&#105;&#110;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125;&#32; &#92;&#83;&#105;&#103;&#109;&#97;&#32;&#61;&#32;&#82;&#32;&#92;&#44;&#32;&#83;&#32;&#92;&#44;&#32;&#83;&#32;&#92;&#44;&#32;&#82;&#94;&#123;&#45;&#49;&#125;&#32;&#61;&#32;&#84;&#32;&#92;&#44;&#32;&#84;&#94;&#123;&#92;&#105;&#110;&#116;&#101;&#114;&#99;&#97;&#108;&#125;&#44; &#92;&#101;&#110;&#100;&#123;&#101;&#113;&#117;&#97;&#116;&#105;&#111;&#110;&#42;&#125;" title="Rendered by QuickLaTeX.com"/></p>
<p>In other words, if we apply the linear transformation defined by <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-35bf2c5b24b044c78d4ac3ecff5b2078_l3.png" class="ql-img-inline-formula " alt="&#84;&#61;&#82;&#32;&#92;&#44;&#32;&#83;" title="Rendered by QuickLaTeX.com" height="14" width="77" style="vertical-align: 0px;"/> to the original white data <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-6fe012cfdbc6f342dbd886ff568ed4ab_l3.png" class="ql-img-inline-formula " alt="&#68;" title="Rendered by QuickLaTeX.com" height="14" width="17" style="vertical-align: 0px;"/> shown by figure 7, we obtain the rotated and scaled data <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-69e87a5558d2fcd98b5a9d1292a4345e_l3.png" class="ql-img-inline-formula " alt="&#68;&#39;" title="Rendered by QuickLaTeX.com" height="17" width="23" style="vertical-align: 0px;"/> with covariance matrix <img src="https://www.visiondummy.com/wp-content/ql-cache/quicklatex.com-c518de6c373241128e4f4bbb640476a1_l3.png" class="ql-img-inline-formula " alt="&#84;&#32;&#92;&#44;&#32;&#84;&#94;&#123;&#92;&#105;&#110;&#116;&#101;&#114;&#99;&#97;&#108;&#125;&#32;&#61;&#32;&#92;&#83;&#105;&#103;&#109;&#97;&#39;&#32;&#61;&#32;&#82;&#32;&#92;&#44;&#32;&#83;&#32;&#92;&#44;&#32;&#83;&#32;&#92;&#44;&#32;&#82;&#94;&#123;&#45;&#49;&#125;" title="Rendered by QuickLaTeX.com" height="19" width="211" style="vertical-align: 0px;"/>. This is illustrated by figure 10:</p>
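<p>The full forward construction can be sketched as follows (an added NumPy illustration; the rotation angle and scaling factors are arbitrary assumptions): applying <code>T = R S</code> to white data produces data whose sample covariance approximates <code>T T<sup>T</sup></code>:</p>

```python
import numpy as np

rng = np.random.default_rng(2)
D = rng.standard_normal((2, 200_000))      # white data, covariance ~ identity

theta = np.pi / 6                          # arbitrary rotation angle (assumption)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = np.diag([4.0, 1.0])                    # arbitrary scaling factors (assumption)

T = R @ S
D_prime = T @ D                            # rotated and scaled data

# The sample covariance of D' should approximate T T^T = R S S R^{-1}.
print(np.round(np.cov(D_prime), 2))
print(np.round(T @ T.T, 2))
```

Since <code>R</code> is orthogonal, <code>R<sup>-1</sup> = R<sup>T</sup></code>, so <code>T T<sup>T</sup></code> and <code>R S S R<sup>-1</sup></code> are the same matrix.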
<div id="attachment_407" style="width: 950px" class="wp-caption aligncenter"><a href="http://www.visiondummy.com/wp-content/uploads/2014/04/lineartrans.png"><img class="size-full wp-image-407 " style="margin: 0px;" title="The covariance matrix represents a linear transformation of the original data" alt="The covariance matrix represents a linear transformation of the original data" src="http://www.visiondummy.com/wp-content/uploads/2014/04/lineartrans.png" width="940" height="451" /></a><p class="wp-caption-text"><b>Figure 10.</b> The covariance matrix represents a linear transformation of the original data.</p></div>
<p>The colored arrows in figure 10 represent the eigenvectors. The largest eigenvector, i.e. the eigenvector with the largest corresponding eigenvalue, always points in the direction of the largest variance of the data and thereby defines its orientation. Because the covariance matrix is symmetric, subsequent eigenvectors are always orthogonal to the largest eigenvector.</p>
<h2>Conclusion</h2>
<p>In this article we showed that the covariance matrix of observed data is directly related to a linear transformation of white, uncorrelated data. This linear transformation is completely defined by the eigenvectors and eigenvalues of the data. While the eigenvectors represent the rotation matrix, the eigenvalues correspond to the square of the scaling factor in each dimension.</p>
<p><strong>If you&#8217;re new to this blog, don&#8217;t forget to subscribe, or <a href="https://twitter.com/vincent_spruyt" title="Follow me on Twitter!" target="_blank">follow me on twitter</a>!</strong><br />


</p>
<p>The post <a rel="nofollow" href="https://www.visiondummy.com/2014/04/geometric-interpretation-covariance-matrix/">A geometric interpretation of the covariance matrix</a> appeared first on <a rel="nofollow" href="https://www.visiondummy.com">Computer vision for dummies</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.visiondummy.com/2014/04/geometric-interpretation-covariance-matrix/feed/</wfw:commentRss>
		<slash:comments>47</slash:comments>
		</item>
	</channel>
</rss>
