What do eigenvalues express in the covariance matrix?

310 views (last 30 days)
Mohamed Moawed on 23 Apr 2013
Commented: TUSHAR MURATKAR on 18 Feb 2020
Is there a relationship between a covariance matrix and its eigenvalues? For example:
Consider a 321 × 261 image, so the dimension is 321 × 261 = 83781. With only 32 observations and 83781 unknowns, the data form a 32 × 83781 matrix (32 rows, 83781 columns).
We then calculate the 32 × 32 covariance matrix, which gives 32 eigenvalues. The question is: do these eigenvalues express the 32 images, or is there no relationship between the eigenvalues and the images?
Thanks.
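The setup described above can be sketched as follows. This is only an illustration with random data standing in for the 32 images; the step mapping the small eigenvectors back to image space (the "snapshot" trick) is an assumption about the intended workflow, not something stated in the question:

```matlab
% 32 observations, each a 321*261 = 83781-pixel image (random stand-in data)
rng(0);
X = rand(32, 83781);               % rows = images, columns = pixels
Xc = X - mean(X, 1);               % center each pixel across the 32 images
C_small = (Xc * Xc') / (32 - 1);   % 32-by-32 covariance of the observations
[V, D] = eig(C_small);             % 32 eigenvalues and 32 eigenvectors
% Each small eigenvector can be mapped back to an 83781-long "eigenimage":
U = Xc' * V;                       % columns are (unnormalized) principal directions
```

Each eigenvalue measures how much of the total variability across the 32 images lies along the corresponding eigenimage, so the eigenvalues describe the spread of the image set, not individual images.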
  8 comments
Mohamed Moawed on 23 Apr 2013
What about from the image point of view? When I want to select some eigenvectors, how can I tell which eigenvector corresponds to which observation or image?
Vincent Spruyt on 10 Mar 2015
The eigenvalues in this case represent the magnitude of the spread in the directions of the principal components. If your data has a diagonal covariance matrix (the covariances are zero), then the eigenvalues are equal to the variances along the axes.
If the covariance matrix is not diagonal, then the eigenvalues still give the variance of the data along the principal components, whereas the entries of the covariance matrix describe the spread along the original axes.
Here is an article (the source of the figures originally shown above) that discusses this in more detail: http://www.visiondummy.com/2014/04/geometric-interpretation-covariance-matrix/
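The diagonal-covariance case can be checked numerically with a small sketch (random data, chosen standard deviations of 3 and 0.5 are arbitrary):

```matlab
% Axis-aligned data: covariances are ~0, so eigenvalues match per-axis variances
rng(0);
data = [3*randn(1000,1), 0.5*randn(1000,1)];  % std 3 along x, std 0.5 along y
C  = cov(data);        % nearly diagonal 2-by-2 covariance matrix
ev = sort(eig(C));     % close to sort(var(data)), i.e. roughly 0.25 and 9
```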


Accepted Answer

Kye Taylor on 23 Apr 2013
Edited: Kye Taylor on 23 Apr 2013
Long story short: the eigenvalues of the covariance matrix encode the variability of the data in an orthogonal basis that captures as much of the data's variability as possible in its first few basis vectors (aka the principal component basis).
For example, this code creates an ellipse whose major axis is the x-axis and whose minor axis is the y-axis.
t = linspace(0,2*pi,256);
data = [cos(t);0.2*sin(t)];
plot(data(1,:),data(2,:),'.')
axis([-1,1,-1,1])
Now, compute the variance of the data's coordinates
% transpose to get variance down each column
% computes variance of each coordinate of the data
v = var(data')
Finally, observe the eigenvalues of the covariance matrix are equal to the variance of the data's coordinates
% need to transpose since input to cov must have
% rows = observations
% columns = variables
l = eig(cov(data'))
In other words, v and l contain the same values. The order may differ because the eig function returns eigenvalues in no particular order.
Note that if the data is rotated so that the major and minor axes are no longer the x- and y-axes, then var(data') no longer computes the variances along the principal axes, but eig(cov(data')) still does automatically, since the eigenvectors are the principal components.
For example
% rotate everything
r = [cos(pi/4),-sin(pi/4);sin(pi/4),cos(pi/4)]; % rotation matrix
rData = r*data;
hold on
plot(rData(1,:),rData(2,:),'r.') % plot rotated data
vr = var(rData') % different
lr = eig(cov(rData')) % should be same as l
  4 comments
Mohamed Moawed on 24 Apr 2013
Wow, that is nice, thanks for your kind reply. I need one more thing: in my work I have several fMRI brain images for Alzheimer's disease, and I compare the parts of these images to find the regions that differ between positive and negative images. How can I find those regions? I want to use PCA to get the eigenvalues, which correspond to the principal components of these images, but how can I tell which eigenvalue corresponds to which part of which image?
Kye Taylor on 24 Apr 2013
Open up a new thread... and kindly accept my answer.


More Answers (1)

Shashank Prasanna on 23 Apr 2013
Edited: Shashank Prasanna on 23 Apr 2013
Essentially, what you are describing are the principal components of your data.
PCA is a popular dimensionality reduction technique, used for example to compress an image so that it still retains most of its variance.
The pca function in MATLAB does all of this for you directly.
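A minimal sketch of using pca with several images, assuming the Statistics and Machine Learning Toolbox is available and that each image has been reshaped into one row of the data matrix (the sizes below are small stand-ins, not the 83781-pixel images from the question):

```matlab
% Stack each image as one row of X: nImages-by-nPixels
rng(0);
nImages = 32; nPixels = 100;          % small stand-in sizes for illustration
X = rand(nImages, nPixels);           % rows = observations (images)
[coeff, score, latent] = pca(X);
% coeff  : nPixels-by-k principal directions ("eigenimages")
% score  : nImages-by-k coordinates of each image in the PC basis
% latent : eigenvalues of cov(X) = variance along each principal component
```

Note that with nImages observations, at most nImages - 1 components have nonzero variance, which is why the 32 × 32 covariance in the original question yields a small number of meaningful eigenvalues.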
  2 comments
Mohamed Moawed on 23 Apr 2013
Yes, I know I want to use PCA, but I don't understand how to use it with more than one image to get the principal components of those images.
TUSHAR MURATKAR on 18 Feb 2020
In my reference paper on wireless communication, the covariance matrix is built from a vector comprising channel coefficients, and the nonzero eigenvalues of the covariance matrix are calculated. What do these signify in the context of wireless communication?
