
# How to draw a covariance error ellipse?

## Introduction

In this post, I will show how to draw an error ellipse, a.k.a. confidence ellipse, for 2D normally distributed data. The error ellipse represents an iso-contour of the Gaussian distribution, and allows you to visualize a 2D confidence interval. The following figure shows a 95% confidence ellipse for a set of 2D normally distributed data samples. This confidence ellipse defines the region that contains 95% of all samples that can be drawn from the underlying Gaussian distribution.

Figure 1. 2D confidence ellipse for normally distributed data

In the next sections we will discuss how to obtain confidence ellipses for different confidence values (e.g. 99% confidence interval), and we will show how to plot these ellipses using Matlab or C++ code.

## Axis-aligned confidence ellipses

Before deriving a general methodology to obtain an error ellipse, let’s have a look at the special case where the major axis of the ellipse is aligned with the X-axis, as shown by the following figure:

Figure 2. Confidence ellipse for uncorrelated Gaussian data

The above figure illustrates that the angle of the ellipse is determined by the covariance of the data. In this case, the covariance is zero, such that the data is uncorrelated, resulting in an axis-aligned error ellipse.

The covariance matrix of the data shown by figure 2 is:

\[\Sigma = \begin{bmatrix} 8.4213 & 0 \\ 0 & 0.9387 \end{bmatrix}\]

Furthermore, it is clear that the magnitudes of the ellipse axes depend on the variance of the data. In our case, the largest variance is in the direction of the X-axis, whereas the smallest variance lies in the direction of the Y-axis.

In general, the equation of an axis-aligned ellipse with a major axis of length $2a$ and a minor axis of length $2b$, centered at the origin, is defined by the following equation:

(1) \[\left(\frac{x}{a}\right)^2 + \left(\frac{y}{b}\right)^2 = 1\]

In our case, the lengths of the axes are defined by the standard deviations $\sigma_x$ and $\sigma_y$ of the data, such that the equation of the error ellipse becomes:

(2) \[\left(\frac{x}{\sigma_x}\right)^2 + \left(\frac{y}{\sigma_y}\right)^2 = s\]

where $s$ defines the scale of the ellipse and could be any arbitrary number (e.g. s=1). The question is now how to choose $s$ such that the scale of the resulting ellipse represents a chosen confidence level (e.g. a 95% confidence level corresponds to s=5.991).

Our 2D data is sampled from a multivariate Gaussian with zero covariance. This means that both the x-values and the y-values are normally distributed too. Therefore, the left hand side of equation (2) actually represents the sum of squares of independent, normally distributed data samples. The sum of squared Gaussian data points is known to be distributed according to a so-called Chi-Square distribution. A Chi-Square distribution is defined in terms of 'degrees of freedom', which represent the number of unknowns. In our case there are two unknowns, and therefore two degrees of freedom.

Therefore, we can easily obtain the probability that the above sum, and thus $s$, equals a specific value by calculating the Chi-Square likelihood. In fact, since we are interested in a confidence interval, we are looking for the probability that $s$ is less than or equal to a specific value, which can easily be obtained using the cumulative Chi-Square distribution. As statisticians are lazy people, we usually don't try to calculate this probability, but simply look it up in a probability table: https://people.richland.edu/james/lecture/m170/tbl-chi.html.

For example, using this probability table we can easily find that, in the 2-degrees-of-freedom case:

\[P(s \le 5.991) = 0.95\]

Therefore, a 95% confidence interval corresponds to s=5.991. In other words, 95% of the data will fall inside the ellipse defined as:

(3) \[\left(\frac{x}{\sigma_x}\right)^2 + \left(\frac{y}{\sigma_y}\right)^2 = 5.991\]

Similarly, a 99% confidence interval corresponds to s=9.210 and a 90% confidence interval corresponds to s=4.605.
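These table values can also be checked programmatically. For two degrees of freedom the cumulative Chi-Square distribution happens to have a closed form, $P(s \le x) = 1 - e^{-x/2}$, so the quantile for a confidence level $p$ is simply $s = -2\ln(1-p)$. A minimal Python sketch (the loop and variable names are just illustrative):

```python
import math

# For 2 degrees of freedom the Chi-Square CDF has the closed form
# P(s <= x) = 1 - exp(-x / 2), so the quantile (inverse CDF) for a
# confidence level p is simply s = -2 * ln(1 - p).
for p in (0.90, 0.95, 0.99):
    s = -2.0 * math.log(1.0 - p)
    print(f"{p:.0%} confidence -> s = {s:.3f}")
# prints s = 4.605, 5.991 and 9.210 respectively
```

This reproduces the table lookups above without needing the printed table (Matlab's `chi2inv(p, 2)` gives the same numbers).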

The error ellipse shown by figure 2 can therefore be drawn as an ellipse with a major axis length equal to $2\sigma_x\sqrt{5.991}$ and a minor axis length equal to $2\sigma_y\sqrt{5.991}$.
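As a sanity check, we can sample uncorrelated Gaussian data and count how many points satisfy equation (3); the fraction should be close to 0.95. A stdlib-only Python sketch (the standard deviations are illustrative, roughly matching the variances of figure 2):

```python
import random

random.seed(0)
sigma_x, sigma_y = 2.9, 0.97   # illustrative standard deviations
n = 100_000

# Count samples satisfying equation (3): (x/sx)^2 + (y/sy)^2 <= 5.991
inside = sum(
    (random.gauss(0, sigma_x) / sigma_x) ** 2
    + (random.gauss(0, sigma_y) / sigma_y) ** 2 <= 5.991
    for _ in range(n)
)
print(inside / n)  # close to 0.95
```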

## Arbitrary confidence ellipses

In cases where the data is correlated, such that the covariance is non-zero, the resulting error ellipse will not be axis-aligned. In this case, the reasoning of the above paragraph only holds if we temporarily define a new coordinate system in which the ellipse becomes axis-aligned, and then rotate the resulting ellipse afterwards.

In other words, whereas we calculated the variances $\sigma_x^2$ and $\sigma_y^2$ parallel to the x-axis and y-axis earlier, we now need to calculate these variances parallel to what will become the major and minor axes of the confidence ellipse. The directions in which these variances need to be calculated are illustrated by a pink and a green arrow in figure 1.

Figure 1. 2D confidence ellipse for normally distributed data

These directions are actually the directions in which the data varies the most, and are defined by the covariance matrix. The covariance matrix can be considered as a matrix that linearly transformed some original data to obtain the currently observed data. In a previous article about eigenvectors and eigenvalues we showed that the direction vectors along such a linear transformation are the eigenvectors of the transformation matrix. Indeed, the vectors shown by pink and green arrows in figure 1, are the eigenvectors of the covariance matrix of the data, whereas the length of the vectors corresponds to the eigenvalues.

The eigenvalues therefore represent the spread of the data in the direction of the eigenvectors. In other words, the eigenvalues represent the variance of the data in the direction of the eigenvectors. In the case of axis-aligned error ellipses, i.e. when the covariance equals zero, the eigenvalues equal the variances on the diagonal of the covariance matrix and the eigenvectors coincide with the x-axis and y-axis. In the case of arbitrary correlated data, the eigenvectors indicate the directions of the largest spread of the data, whereas the eigenvalues define how large this spread really is.

Thus, the 95% confidence ellipse can be defined similarly to the axis-aligned case, with the major axis of length $2\sqrt{5.991\,\lambda_1}$ and the minor axis of length $2\sqrt{5.991\,\lambda_2}$, where $\lambda_1$ and $\lambda_2$ represent the eigenvalues of the covariance matrix.

To obtain the orientation of the ellipse, we simply calculate the angle of the largest eigenvector towards the x-axis:

(4) \[\alpha = \arctan\left(\frac{\mathbf{v}_1(y)}{\mathbf{v}_1(x)}\right)\]

where $\mathbf{v}_1$ is the eigenvector of the covariance matrix that corresponds to the largest eigenvalue.
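For a 2x2 covariance matrix, the eigenvalues and the angle of the largest eigenvector can even be written out by hand, using the trace and determinant. A stdlib-only Python sketch (the example covariance matrix [[2, 1], [1, 2]] is illustrative, not from the post):

```python
import math

def ellipse_params(sxx, sxy, syy):
    """Eigenvalues of [[sxx, sxy], [sxy, syy]] and the angle (radians)
    of the eigenvector belonging to the largest eigenvalue."""
    t = sxx + syy                              # trace
    d = sxx * syy - sxy * sxy                  # determinant
    root = math.sqrt(t * t / 4.0 - d)
    lam1 = t / 2.0 + root                      # largest eigenvalue
    lam2 = t / 2.0 - root                      # smallest eigenvalue
    if sxy != 0:
        # (A - lam1*I) v = 0  =>  v is proportional to (sxy, lam1 - sxx)
        angle = math.atan2(lam1 - sxx, sxy)
    else:
        # axis-aligned case: largest eigenvector lies along x or y
        angle = 0.0 if sxx >= syy else math.pi / 2.0
    return lam1, lam2, angle

lam1, lam2, angle = ellipse_params(2.0, 1.0, 2.0)
print(lam1, lam2, math.degrees(angle))  # 3.0 1.0 45.0
```

For the diagonal covariance of figure 2, `ellipse_params(8.4213, 0.0, 0.9387)` returns the variances themselves and an angle of zero, matching the axis-aligned case.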

Based on the minor and major axis lengths and the angle between the major axis and the x-axis, it becomes trivial to plot the confidence ellipse. Figure 3 shows error ellipses for several confidence values:

Figure 3. Confidence ellipses for normally distributed data
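The construction above can be sketched in code: the ellipse boundary is generated parametrically as $(a\cos\theta,\, b\sin\theta)$ with $a = \sqrt{5.991\,\lambda_1}$ and $b = \sqrt{5.991\,\lambda_2}$, rotated by the angle of equation (4) and shifted to the data mean. A stdlib-only Python sketch (the eigenvalues, angle, and mean are taken as given inputs here):

```python
import math

def ellipse_points(mean, lam1, lam2, angle, s=5.991, n=100):
    """Boundary points of the confidence ellipse with scale s."""
    a = math.sqrt(s * lam1)            # half-length of the major axis
    b = math.sqrt(s * lam2)            # half-length of the minor axis
    ca, sa = math.cos(angle), math.sin(angle)
    pts = []
    for i in range(n):
        t = 2.0 * math.pi * i / n
        x, y = a * math.cos(t), b * math.sin(t)   # axis-aligned ellipse
        # rotate by `angle` and translate to the data mean
        pts.append((mean[0] + ca * x - sa * y, mean[1] + sa * x + ca * y))
    return pts

pts = ellipse_points((0.0, 0.0), 3.0, 1.0, math.radians(45))
```

Feeding these points to any plotting routine (Matlab's `plot`, matplotlib, etc.) draws the confidence ellipse.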

## Conclusion

In this article we showed how to obtain the error ellipse for 2D normally distributed data, according to a chosen confidence value. This is often useful when visualizing or analyzing data and will be of interest in a future article about PCA.

Furthermore, source code samples were provided for Matlab and C++.


There is a little mistake in the text (not in the Matlab source code): the major (minor) axis lengths are 2*sqrt[\lambda_1 * 5.991] (2*sqrt[\lambda_2 * 5.991]). The axis lengths are related to standard deviations, whereas \lambda_1 and \lambda_2 come from the covariance matrix (std = sqrt(variance)).

2. You are right, tnx for spotting this! I fixed it now in the text.

3. Filip says:

I love you man, you saved my life with this blog. Don’t stop posting stuff like this.

4. Alvaro Cáceres says:

Thanks Vincent! I find very useful your post!
One question, If I want to know if an observation is under the 95% of confidence, can I replace the value under this formula (matlab):
a=chisquare_val*sqrt(largest_eigenval)
b=chisquare_val*sqrt(smallest_eigenval)
(x/a)^2 + (y/b)^2 <= 5.991 ?
Thanks

5. Krishna says:

Very helpful. Thanks. How is it different for uniformly distributed data ?

6. Hyeree Kim says:

Thank you very much. Your post is very useful!
I have a question in the matlab code.
What the (chisquare_val = 2.4477)?
I don’t know the meaning 2.4477.

7. MAB says:

Hi
How can I calculate the length of the principal axes if I get negative eigenvalues from the covariance matrix?

glen says:

I think they can’t be negative. Variance can’t be negative, and there are limits to the covariance, though it can be negative.

In cases where the eigenvalues are close, they might go negative due to rounding error in the calculation.

8. Jon Hauris says:

Vincent, you are great, thank you. I’m naming my first born after you!

9. Starter says:

Hi Vincent,

Is this method still applicable when the centre of the ellipse does not coincide with the origin of the coordinating system?

Thank you,

10. J Bashir says:

Hi,
your Post helped me a lot! Thx! Since I needed the error ellipses for a specific purpose, I adapted your code in Mathematica.
If you don’t mind, I ‘d like to share it:

(*Random Data generation*)
s = 2;
rD = Table[RandomReal[], {i, 500}];

x = RandomVariate[NormalDistribution[#, 0.4]] & /@ (+s rD);
y = RandomVariate[NormalDistribution[#, 0.4]] & /@ (-s rD);
data = {x, y}\[Transpose];

(*define error Ellipse*)
ErrorEllipse[data_, percLevel_, points_: 100] :=
Module[
{eVa, eVec, coors, dchi, thetaGrid, ellipse, rEllipse},
{eVa, eVec} = Eigensystem@Covariance[data];

(* Get the coordinates of the data mean*)
coors = Mean[data];

(*Get the perc [%] confidence interval error ellipse*)
dchi = \[Sqrt]Quantile[ChiSquareDistribution[2], percLevel/100];

(* define error ellipse in x and y coordinates*)
thetaGrid = Table[i, {i, 0, 2 \[Pi], 2 \[Pi]/99}];
ellipse = {dchi \[Sqrt]eVa[[1]] Cos[thetaGrid],
dchi \[Sqrt]eVa[[2]] Sin[thetaGrid]};

(* rotate the ellipse and center ellipse at coors*)
rEllipse = coors + # & /@ (ellipse\[Transpose].eVec)
];

(* visualize results*)
percentOneSigma = 66.3;
percentTwoSigma = 95.4;

Show[
ListPlot[data],
ListLinePlot[ErrorEllipse[data, percentOneSigma],
PlotStyle -> {Thick, Red}],
ListLinePlot[ErrorEllipse[data, percentTwoSigma],
PlotStyle -> {Thick, Blue}],
ListPlot[{Mean[data]}, PlotStyle -> {Thick, Red}]
]

Best,
bashir

11. Meysam says:

Hi, thanks a lot for the code. Just a little comment: in general chisquare_val = sqrt(chi2inv(alpha, n)), where alpha = 0.95 is the confidence level and n is the number of degrees of freedom, i.e. the number of parameters = 2.
Here is a small change to make the code a little more beautiful.
Cheers,
Meysam

clc
clear

% Create some random data with mean=m and covariance as below:

m = [10;20]; % mean value
n = length(m);
covariance = [2, 1; 1, 2];
nsam = 1000;

x = (repmat(m, 1, nsam) + sqrtm(covariance)*randn(n,nsam))';

y1=x(:,1);
y2=x(:,2);

mean(y1)
mean(y2)
cov(y1,y2)

data = [y1 y2];

% Calculate the eigenvectors and eigenvalues
%covariance = cov(data);
[eigenvec, eigenval ] = eig(covariance);

% Get the index of the largest eigenvector
[largest_eigenvec_ind_c, r] = find(eigenval == max(max(eigenval)));
largest_eigenvec = eigenvec(:, largest_eigenvec_ind_c);

% Get the largest eigenvalue
largest_eigenval = max(max(eigenval));

% Get the smallest eigenvector and eigenvalue
if(largest_eigenvec_ind_c == 1)
smallest_eigenval = max(eigenval(:,2));
smallest_eigenvec = eigenvec(:,2);
else
smallest_eigenval = max(eigenval(:,1));
smallest_eigenvec = eigenvec(:,1);
end

% Calculate the angle between the x-axis and the largest eigenvector
phi = atan2(largest_eigenvec(2), largest_eigenvec(1));

% This angle is between -pi and pi.
% Let’s shift it such that the angle is between 0 and 2pi
if(phi < 0)
phi = phi + 2*pi;
end

% Get the coordinates of the data mean

% Get the 95% confidence interval error ellipse
%chisquare_val = 2.4477;
alpha = 0.95;
b = chi2inv(alpha, n);

theta = linspace(0,2*pi,100);

X0=m(1);
Y0=m(2);
a=sqrt(b*largest_eigenval);
b=sqrt(b*smallest_eigenval);

%Define a rotation matrix
R = [ cos(phi) sin(phi); -sin(phi) cos(phi) ];
Q=[ a*cos( theta);b*sin( theta)]';

%let's rotate the ellipse to some angle phi
r_ellipse = Q * R;

% Draw the error ellipse
plot(r_ellipse(:,1) + X0,r_ellipse(:,2) + Y0,'g','linewidth',2)
hold on;

% Plot the original data
plot(data(:,1), data(:,2), 'o');
g=plot(m(1),m(2),'o','markersize', 10);
set(g,'MarkerEdgeColor','r','MarkerFaceColor','r')
mindata = min(min(data));
maxdata = max(max(data));
xlim([mindata-3, maxdata+3]);
ylim([mindata-3, maxdata+3]);
hold on;

%Plot the eigenvectors
quiver(X0, Y0, largest_eigenvec(1)*sqrt(largest_eigenval), largest_eigenvec(2)*sqrt(largest_eigenval), '-m', 'LineWidth',2);
quiver(X0, Y0, smallest_eigenvec(1)*sqrt(smallest_eigenval), smallest_eigenvec(2)*sqrt(smallest_eigenval), '-g', 'LineWidth',2);
hold on;

%Set the axis labels
xlabel(' Xdata ','interpreter','latex','fontsize',18)
ylabel('Ydata ','interpreter','latex','fontsize',18)
set(gca, 'fontsize',16);

12. Yiti says:

Hello everyone,
I am trying to do these plots in Python, and I have found the following code:

x = [5,7,11,15,16,17,18]
y = [8, 5, 8, 9, 17, 18, 25]
cov = np.cov(x, y)
lambda_, v = np.linalg.eig(cov)
lambda_ = np.sqrt(lambda_)
from matplotlib.patches import Ellipse
import matplotlib.pyplot as plt
ax = plt.subplot(111, aspect='equal')
for j in range(1, 4):
    ell = Ellipse(xy=(np.mean(x), np.mean(y)),
                  width=lambda_[0]*j*2, height=lambda_[1]*j*2,
                  angle=np.degrees(np.arctan2(v[1, 0], v[0, 0])))
    ell.set_facecolor('none')
    ell.set_edgecolor('black')
    ax.add_artist(ell)
plt.scatter(x, y)
plt.show()

which draws a 1, 2 and 3 standard deviation ellipses.

I don't understand to which percentage (95% etc.) each j corresponds. Could anyone please give me a hint?? Cheers and thanks,
Yisel
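(Editor's note on the question above: a j-standard-deviation ellipse corresponds to scale s = j^2, and for 2 degrees of freedom the contained probability is the Chi-Square CDF at j^2, i.e. 1 - e^(-j^2/2). A small Python sketch:)

```python
import math

# Probability mass of a 2D Gaussian inside the j-sigma ellipse:
# P = chi2_cdf(j**2, df=2) = 1 - exp(-j**2 / 2)
for j in (1, 2, 3):
    p = 1.0 - math.exp(-j * j / 2.0)
    print(f"{j}-sigma ellipse contains {p:.1%} of the data")
# prints 39.3%, 86.5% and 98.9% respectively
```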

13. sonny says:

Hi Vincent, thanks for a great post. I’m a little bit curious, but the mahalanobi distance is more or less the same principle just for higher dimensions? Calling it density contours, error ellipses, or confidence regions? Thanks!

14. Chris says:

Great write up. I was wondering if you have a reference for this method?

15. John Thompson says:

square root of chi-square value (i.e. 5.991).

16. Luis says:

Hi Vincent, the post was excellent. Could you include a short comment on the conditions under which the ellipses switch to a "banana shape"? That is common in cosmological data analysis. Thank you.