Comments for Computer vision for dummies (http://www.visiondummy.com), a blog about intelligent algorithms, machine learning, computer vision, data mining and more.

Comment on How to draw a covariance error ellipse? by Eileen KC http://www.visiondummy.com/2014/04/draw-error-ellipse-representing-covariance-matrix/#comment-283 Sat, 18 Jun 2016 02:53:45 +0000

e1 = find(dis1 <= 1); e2 = find(dis1 > 1);

Comment on How to draw a covariance error ellipse? by Eileen KC http://www.visiondummy.com/2014/04/draw-error-ellipse-representing-covariance-matrix/#comment-282 Sat, 18 Jun 2016 02:52:16 +0000

Sorry… copy/paste is not working. The code needs:

e1 = find(dis1 <= 1); e2 = find(dis1 > 1);

Comment on How to draw a covariance error ellipse? by Eileen KC http://www.visiondummy.com/2014/04/draw-error-ellipse-representing-covariance-matrix/#comment-281 Sat, 18 Jun 2016 02:50:34 +0000

The following colors the data points RED/BLUE depending on whether the data point is inside/outside of the ellipse:

Ntmp = size(data,1);
datatmp = (data - repmat([X0 Y0],Ntmp,1))*R';   % center on the ellipse, rotate into its axis frame
dis1 = (datatmp(:,1)/a).^2+(datatmp(:,2)/b).^2; % normalized squared distance
e1 = find(dis1 <= 1);                           % inside the ellipse
e2 = find(dis1 > 1);                            % outside the ellipse
plot(data(e1,1),data(e1,2),'r.'); hold on;
plot(data(e2,1),data(e2,2),'b.'); hold on;
xpct = size(e1,1)/Ntmp;
fprintf('%5.3f of the points are inside the ellipse\n',xpct);
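
For anyone running the snippet above in isolation: X0, Y0, a, b, and R come from the tutorial's ellipse construction. A minimal setup sketch, reconstructed here (not part of the original comment), assuming the 95% ellipse, where 5.991 is the chi-square quantile with 2 degrees of freedom:

% Minimal setup sketch (reconstructed): derive the ellipse parameters the snippet assumes.
avg = mean(data);  X0 = avg(1);  Y0 = avg(2);   % ellipse center
[V, D] = eig(cov(data));                        % columns of V are eigenvectors
[lambda, order] = sort(diag(D), 'descend');     % largest eigenvalue first
a = sqrt(5.991 * lambda(1));                    % semi-major axis, 95% level (chi-square, 2 dof)
b = sqrt(5.991 * lambda(2));                    % semi-minor axis
R = V(:, order)';                               % so (data - mean) * R' gives ellipse-frame coordinates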

Comment on How to draw a covariance error ellipse? by Eileen KC http://www.visiondummy.com/2014/04/draw-error-ellipse-representing-covariance-matrix/#comment-280 Thu, 16 Jun 2016 20:57:29 +0000

Great work. Can you add something: color all data values RED inside the 95% ellipse and all data values BLUE outside (see post from June 16, 2014). The following code attempts this but is NOT working:

% See if (x/a)^2 + (y/b)^2 <= 5.991 ?
d = (data(:,1)./a).^2+(data(:,2)./b).^2;
e1 = find(d <= s); e2 = find(d >= s);
plot(data(e1,1), data(e1,2), 'r.'); hold on;  % plot data inside ellipse
plot(data(e2,1), data(e2,2), 'b.'); hold on;  % plot data outside ellipse
plot(r_ellipse(:,1) + X0, r_ellipse(:,2) + Y0, 'k-'); hold off;  % plot ellipse
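
A likely reason the attempt above fails, judging from the working snippet in the follow-up comment (comment-281, above): the test is applied to the raw data, but the data must first be centered on (X0, Y0) and rotated into the ellipse's axis frame, and with the tutorial's a and b (which already include the sqrt(5.991) factor) the threshold is 1 rather than 5.991. A minimal sketch of the missing step:

% Center the data on the ellipse and rotate into its axis frame first,
% then test the normalized squared distance against 1.
N = size(data, 1);
local = (data - repmat([X0 Y0], N, 1)) * R';   % world -> ellipse frame
d = (local(:,1) / a).^2 + (local(:,2) / b).^2;
e1 = find(d <= 1);   % inside the 95% ellipse
e2 = find(d >  1);   % outside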

Comment on How to draw a covariance error ellipse? by Jamie Macaulay http://www.visiondummy.com/2014/04/draw-error-ellipse-representing-covariance-matrix/#comment-279 Wed, 08 Jun 2016 10:52:15 +0000

Hi. Thanks a lot for the tutorial and detailed explanation. Great work.

I had a go at hacking together a 3D version in MATLAB. Code below just in case anyone is interested.

% based on http://www.visiondummy.com/2014/04/draw-error-ellipse-representing-covariance-matrix/

clear;
close all;

% Create some random data
s = [1 2 5];
x = randn(334,1);
y1 = normrnd(s(1).*x,1);
y2 = normrnd(s(2).*x,1);
y3 = normrnd(s(3).*x,1);
data = [y1 y2 y3];

% Calculate the eigenvectors and eigenvalues
covariance = cov(data);
[eigenvec, eigenval ] = eig(covariance);

% Get the index of the largest eigenvector
%[largest_eigenvec_ind_c, ~] = find(eigenval == max(max(eigenval)));

[B, largest_eigenvec_ind_c] = sort(max(eigenval), 'descend'); % eigenvalues sorted largest first, with their column indices
largest_eigenvec = eigenvec(:, largest_eigenvec_ind_c(1));

% Get the largest eigenvalue
largest_eigenval = max(max(eigenval));

% % Get the smallest eigenvector and eigenvalue
% if(largest_eigenvec_ind_c == 1)
% smallest_eigenval = max(eigenval(:,2));
% smallest_eigenvec = eigenvec(:,2);
% else
% smallest_eigenval = max(eigenval(:,1));
% smallest_eigenvec = eigenvec(:,1);
% end

% Calculate the angles between the axes and the largest eigenvector.
% (Note: these pairwise atan2 angles are not a consistent Euler-angle
% triple, which is one reason this stays a rough hack.)
angle = atan2(largest_eigenvec(2), largest_eigenvec(1));
angle2 = atan2(largest_eigenvec(3), largest_eigenvec(1));
angle3 = atan2(largest_eigenvec(3), largest_eigenvec(2));

% % This angle is between -pi and pi.
% % Let's shift it such that the angle is between 0 and 2pi
% if(angle < 0)
% angle = angle + 2*pi;
% end
% if(angle2 < 0)
% angle2 = angle2 + 2*pi;
% end
% if(angle3 < 0)
% angle3 = angle3 + 2*pi;
% end

% Get the coordinates of the data mean
avg = mean(data);

% Get the 95% confidence interval error ellipse
chisquare_val = 2.4477; % sqrt(5.991), the 95% value for 2 dof; for a 3D 95% ellipsoid, sqrt(7.815) = 2.796 (3 dof) may be what is wanted
theta_grid = linspace(0,2*pi); % leftover from the 2D version (unused below)
phi = angle;                   % leftover from the 2D version (unused below)
X0=avg(1);
Y0=avg(2);
Z0=avg(3);
a=chisquare_val*sqrt(largest_eigenval);
b=chisquare_val*sqrt(max(eigenval(:,largest_eigenvec_ind_c(2))));
c=chisquare_val*sqrt(max(eigenval(:,largest_eigenvec_ind_c(3))));

hold on;
%create an ellipsoid.
[x,y,z] = ellipsoid(X0,Y0,Z0,a,b,c,40);
% Create the rotation matrix (createEulerAnglesRotation is not a MATLAB
% built-in; it comes from a geometry toolbox or a user-supplied helper).
R = createEulerAnglesRotation(angle, angle2, angle3);
r_ellipse = [x(1,:);y(1,:);z(1,:)]' * R(1:3,1:3); % computed but not used below

h=surf(x,y,z,'EdgeColor','none','FaceAlpha',0.1);
t = hgtransform;
set(h,'Parent',t)
ry_angle = -15*pi/180; % Convert to radians (unused below)
R = createEulerAnglesRotation(-angle2, -angle3, angle); % roll, pitch, heading
set(t,'Matrix',R);

% Plot the original data
plot3(data(:,1), data(:,2), data(:,3), '.');
mindata = min(min(data));
maxdata = max(max(data));
xlim([mindata-3, maxdata+3]);
ylim([mindata-3, maxdata+3]);

% % Plot the eigenvectors
% quiver(X0, Y0, largest_eigenvec(1)*sqrt(largest_eigenval), largest_eigenvec(2)*sqrt(largest_eigenval), '-m', 'LineWidth',2);
% quiver(X0, Y0, smallest_eigenvec(1)*sqrt(smallest_eigenval), smallest_eigenvec(2)*sqrt(smallest_eigenval), '-g', 'LineWidth',2);

% Set the axis labels
hXLabel = xlabel('x');
hYLabel = ylabel('y');
hZLabel = zlabel('z');
hold off
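
A minimal alternative sketch for the rotation step (reconstructed, reusing data, avg, eigenvec, and eigenval from the code above): rotating the ellipsoid mesh directly with the eigenvector matrix avoids recovering Euler angles altogether; 7.815 is the 95% chi-square quantile with 3 degrees of freedom.

% Alternative: rotate the axis-aligned ellipsoid mesh with the eigenvector matrix.
k = sqrt(7.815);                            % 95% level, 3 dof
radii = k * sqrt(diag(eigenval));           % semi-axis lengths along the eigenvectors
[xs, ys, zs] = ellipsoid(0, 0, 0, radii(1), radii(2), radii(3), 40);
M = [xs(:) ys(:) zs(:)] * eigenvec';        % rotate every mesh vertex
xs = reshape(M(:,1), size(xs)) + avg(1);    % translate to the data mean
ys = reshape(M(:,2), size(ys)) + avg(2);
zs = reshape(M(:,3), size(zs)) + avg(3);
figure; hold on;
surf(xs, ys, zs, 'EdgeColor', 'none', 'FaceAlpha', 0.1);
plot3(data(:,1), data(:,2), data(:,3), '.');
hold off;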

Comment on What are eigenvectors and eigenvalues? by Kaleo Brandt http://www.visiondummy.com/2014/03/eigenvalues-eigenvectors/#comment-277 Fri, 03 Jun 2016 05:53:15 +0000

I was also confused about this. After researching for a good hour or two on determinants and invertible matrices, I think it's safe to say that a non-invertible matrix has linearly dependent rows (or columns), for example when it:
- has a row (or column) with all zeros, or
- has at least two rows (or columns) that are proportional to each other.

The underlying reason for this (and its connection with determinants) is that the determinant of a matrix is essentially the signed volume, in R^n, of the parallelepiped spanned by the columns of the matrix (see http://math.stackexchange.com/questions/668/whats-an-intuitive-way-to-think-about-the-determinant).
So, if two of the columns of the matrix are proportional, they are parallel, and the parallelepiped they span collapses to zero volume. (The volume is also zero if one of the vectors is the null vector.)
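
A quick numeric check of this idea, as a minimal MATLAB sketch (not from the original comment): a matrix with linearly dependent rows has determinant zero, and the nonzero solution of Av = 0 is precisely an eigenvector with eigenvalue 0.

% A singular matrix: the second row is twice the first.
A = [1 2; 2 4];
det(A)            % 0: the parallelogram spanned by the columns is flat
[V, D] = eig(A);  % eigenvalues 0 and 5 on the diagonal of D
V(:, 1)           % eigenvector for eigenvalue 0: a nonzero v with A*v = 0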

So I think the reason is that, unless v is the null vector of all zeros, one of the above properties is necessary for a linear combination of the rows to add up to zero. (This is the part I'm unsure about, because the dimension of equation (2) isn't 1x1, is it?)

If someone actually knows what they're talking about, please correct me. This is just my understanding after googling some stuff.

Comment on About me by Sounak Dey http://www.visiondummy.com/aboutme/#comment-271 Tue, 10 May 2016 13:16:33 +0000

Thank you so much; I appreciate a fantastic effort.

Comment on A geometric interpretation of the covariance matrix by ninjajack http://www.visiondummy.com/2014/04/geometric-interpretation-covariance-matrix/#comment-269 Thu, 28 Apr 2016 06:20:36 +0000

Very useful. Thanks!

Comment on Why divide the sample variance by N-1? by scott http://www.visiondummy.com/2014/03/divide-variance-n-1/#comment-267 Fri, 22 Apr 2016 12:20:55 +0000

Check "we can write their joint likelihood function as the sum of all individual likelihoods…" directly before equation 5. I think it should read "the product of the individual likelihoods" instead of "the sum".
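
For reference, the factorization being pointed out: for independent observations the joint likelihood is the product of the individual likelihoods, and a sum only appears after taking the logarithm:

$$L(\mu,\sigma^2) = \prod_{i=1}^{N} p(x_i \mid \mu, \sigma^2), \qquad \log L(\mu,\sigma^2) = \sum_{i=1}^{N} \log p(x_i \mid \mu, \sigma^2)$$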

Thank you for your articles; I have found them very helpful.

Comment on A geometric interpretation of the covariance matrix by Shibumon Alampatta http://www.visiondummy.com/2014/04/geometric-interpretation-covariance-matrix/#comment-266 Thu, 21 Apr 2016 04:56:52 +0000

Great article, but one doubt: it was mentioned that the direction of an eigenvector remains unchanged when a linear transformation is applied, but in Fig. 10 the direction of the vector is also changed. Can you please explain it?
