How to evaluate the proximity of different 2D distributions?

3 views (last 30 days)
zhe zhu on 18 Dec 2021
Answered: Nipun on 30 May 2024
Dear MATLAB community,
I'm working in the field of laser physics, and I want to know what methods can be used to evaluate how close a measured distribution is to a 2D Gaussian.
I measured the intensity distribution of four light spots. As can be seen from the figure below, spots 1 and 2 are relatively close to a Gaussian distribution, 3 is worse, and 4 is the worst. I have a 2D matrix of intensities for each spot. Is there a function to evaluate how close each distribution is to a perfect 2D Gaussian distribution (or to matrix 1)? (For example, f(matrix(1))=1, f(matrix(2))=0.9, f(matrix(3))=0.3, f(matrix(4))=0.1.)
(Besides, here the relative intensity is more meaningful than the absolute intensity, so normalized matrices are used in the comparison.)
I have tested the Kullback–Leibler divergence in the form D_KL(P||Q) = sum(P.*log(P./Q),'all','omitnan') (here P is the normalized 2D matrix 2, 3, or 4 and Q is matrix 1), and the results are pretty poor:
DKL =
1.0e+03 *
0 6.8731 5.6944 0.0839
I also tried a dispersion measure like div = var(A,1,'all'), and the result is not very satisfactory either. A friend from the Department of Statistics told me to check functions like lillietest, chi2gof, and kstest. Unfortunately, those functions only accept one-dimensional data. This is not a serious problem, but I am really curious how to get a simple description of such a two-dimensional Gaussian distribution, like what those test functions do for one-dimensional data.
Sorry for my poor English, and I sincerely appreciate your answers :)
Here are several of the evaluation metrics I tested in my code:
load ls.mat
for i = 1:4
    ene(i) = sum(sum(ls(:,:,i)));                         % total energy of spot i
    labc = ls(:,:,i);
    es = sort(labc(:));
    enetop(i) = mean(es(end-100:end));                    % mean of the brightest ~100 pixels
    % enetop = median(es);
    enefwhm(i) = sum(sum(ls(:,:,i) >= 0.5*enetop(i)));    % pixel count above half maximum (FWHM area)
    enes(i)  = sum(sum(ls(:,:,i) >= 0.1*enetop(i)));      % pixel count above 10% of maximum
    eness(i) = sum(sum(ls(:,:,i) >= 0.1*enetop(i)));      % (same threshold as enes)
end
enerate  = enefwhm ./ ene;
enerate1 = enefwhm ./ enes;
enerate2 = enefwhm ./ eness;
% DKL
lss = ls;
% lss(lss <= 0.5*max(lss(:))) = 0;
Q = lss(:,:,1) + 1;        % +1 avoids log(0) in the KL term
Q = Q / max(Q(:));         % note: normalized by the maximum, not by the sum
for k = 1:4
    P = lss(:,:,k) + 1;
    P = P ./ max(P(:));
    A = P .* log(P ./ Q);
    DKL(k) = sum(A, 'all', 'omitnan');
end
% div
for j = 1:4
    div(j) = var(ls(:,:,j), 1, 'all');
end
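One likely reason the DKL values above look so large: KL divergence assumes P and Q are probability distributions that each sum to 1, while the code normalizes by the maximum, which makes D_KL scale with the number of pixels. A minimal sketch of the sum-normalized version (assuming the same `ls.mat`; the `eps` offset is my assumption, used only to avoid log(0) and division by zero):

```matlab
% KL divergence with proper probability normalization
load ls.mat
Q = ls(:,:,1) + eps;       % small offset avoids log(0) and division by zero
Q = Q / sum(Q(:));         % normalize so Q sums to 1 (a probability distribution)
DKL = zeros(1,4);
for k = 1:4
    P = ls(:,:,k) + eps;
    P = P / sum(P(:));     % normalize so P sums to 1
    DKL(k) = sum(P(:) .* log(P(:) ./ Q(:)));   % D_KL(P||Q)
end
disp(DKL)                  % DKL(1) is exactly 0 by construction
```

With this normalization, identical distributions give exactly 0 and the values for matrices 2–4 become comparable on a meaningful scale.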

Answers (1)

Nipun on 30 May 2024
Hi Zhe,
I understand that you want to evaluate the proximity of different 2D distributions. One way to do this is by calculating the Kullback-Leibler (KL) divergence or using metrics like the Bhattacharyya distance. Here’s an example using the Bhattacharyya distance:
% Define mean and covariance matrices of the two 2D Gaussian distributions
mean1 = [1; 2];
cov1 = [1 0.5; 0.5 2];
mean2 = [1.5; 2.5];
cov2 = [2 0.4; 0.4 1];
% Bhattacharyya distance calculation
meanDiff = mean1 - mean2;
covAvg = (cov1 + cov2) / 2;
distance = 0.125 * (meanDiff' / covAvg * meanDiff) + 0.5 * log(det(covAvg) / sqrt(det(cov1) * det(cov2)));
disp(distance);
This code calculates the Bhattacharyya distance between two 2D Gaussian distributions.
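Since the question starts from measured intensity matrices rather than fitted means and covariances, the Bhattacharyya coefficient can also be computed directly on the normalized matrices. A sketch under the assumption that `ls(:,:,k)` holds the measured intensities, with matrix 1 as the reference; the coefficient lies in (0, 1] and equals 1 for identical distributions, which matches the kind of 0-to-1 score asked for:

```matlab
% Bhattacharyya coefficient between discretized 2D distributions
load ls.mat
Q = ls(:,:,1) / sum(ls(:,:,1), 'all');    % reference distribution (matrix 1)
BC = zeros(1,4);
for k = 1:4
    P = ls(:,:,k) / sum(ls(:,:,k), 'all');
    BC(k) = sum(sqrt(P .* Q), 'all');     % 1 for identical distributions
end
dB = -log(BC);                            % Bhattacharyya distance (0 for identical)
disp(BC)
```

`BC(1)` is 1 by construction, so the remaining values rank matrices 2–4 by their closeness to matrix 1.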
Hope this helps.
Regards,
Nipun

Categories

More about Hypothesis Tests in Help Center and File Exchange.
