Obtaining the Bonferroni 95% confidence interval between two variables
I have a matrix A [1000,1] of variable A readings at 1000 locations, and a matrix B [1000,1] of variable B readings at the same 1000 locations as variable A. How can I obtain the Bonferroni 95% confidence interval between those two variables?
Accepted Answer
BN
on 3 Jun 2020
Hello, my friend,
I learned this a few days ago. To get a Bonferroni p-value you need STATS. You may not have heard of STATS: it is the stats output structure returned by ANOVA2 (anova2) if A and B are normal, or by KRUSKALWALLIS (kruskalwallis) or FRIEDMAN (friedman) if they are not.
So you need to run one of these tests first and then obtain the Bonferroni p-value from its output. But which of anova2, kruskalwallis, or friedman should you use? That depends on your purpose and on your data. If your data are normal you can use the ANOVA family; if they are not normal you need the kruskalwallis or friedman tests.
Let me explain it as clearly as I can. Since you want a Bonferroni p-value, I take it you want to know whether A and B differ significantly (at the 0.05 significance level); that is the purpose I mentioned above. Now, what about your data: how can you know whether they are normal or not? You need to run a one-sample Kolmogorov-Smirnov test (kstest) on each variable first, then check whether the null hypothesis of normality is rejected. If it is rejected, you need the kruskalwallis or friedman tests (I recommend kruskalwallis).
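For instance, a minimal sketch of that normality check on a single column could look like the lines below (kstest compares against the standard normal distribution, so the data are standardized first; lillietest is an alternative that estimates the mean and standard deviation for you; the variable names here are just illustrative):
x = randn(1000,1);              % placeholder column standing in for your real A or B
z = (x - mean(x)) / std(x);     % standardize, because kstest tests against N(0,1)
[h, p] = kstest(z);             % h = 1 means normality is rejected, h = 0 means it is not
% [h, p] = lillietest(x);       % alternative normality test, no standardizing needed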
To keep this answer short I won't go into kstest or kruskalwallis themselves; if you look at the documentation of these two tests I'm sure you can run them.
Now, here is an example:
clear; clc;
A = rand(1000,1); % random data set, just like your A
B = rand(1000,1); % random data set, just like your B
AB = [A B]; % combine them in AB
% check whether A and B are normal using the Kolmogorov-Smirnov test
% (kstest works on one vector at a time, not on the whole matrix AB)
[hA, pA] = kstest(A);
[hB, pB] = kstest(B);
% if h = 1 normality is rejected, and if h = 0 the data look normal
% p gives us the p-value of the test
% since in this example h = 1, my data are not normal, so I go ahead and use the
% Kruskal-Wallis test. If h had been 0 I would use ANOVA.
[p, tbl, stats] = kruskalwallis(AB);
% you can see the p-value in p. Since p is greater than the significance level
% that I need (0.01 or 0.05, as you wish), the null hypothesis of
% Kruskal-Wallis is NOT rejected, so A and B have no significant difference.
% And you now have stats, which you can use to obtain the Bonferroni
% p-values and confidence intervals:
c = multcompare(stats,'CType','bonferroni'); % LOOK, we use stats here
Now open c: the last column is the Bonferroni-adjusted p-value. You said you need to check it at the 0.05 significance level, so since 0.4345 > 0.05 you fail to reject the null hypothesis (0.4345 is the Bonferroni p-value in this example), meaning A and B are not significantly different.
If you wanted to check at 0.01 you would likewise say that since 0.4345 > 0.01 the null hypothesis is not rejected.
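Since your original question was about the Bonferroni 95% confidence interval itself, note that c also contains it: each row of c is one pairwise comparison, and its columns are [group1, group2, lower CI bound, estimated difference, upper CI bound, p-value]. A minimal sketch of pulling those values out (variable names are just illustrative; with stats from kruskalwallis the estimate is a difference of mean ranks):
c = multcompare(stats,'CType','bonferroni'); % default Alpha = 0.05, i.e. 95% intervals
ciLower  = c(:,3);  % lower bound of the Bonferroni 95% CI for each comparison
estimate = c(:,4);  % estimated difference between the two groups
ciUpper  = c(:,5);  % upper bound of the Bonferroni 95% CI
pBonf    = c(:,6);  % Bonferroni-adjusted p-value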
So, dear Ahmed, this is everything I know. I learned it this week from the MATLAB documentation, so I'm quite sure that if you have any problem you will find your answer in the documentation. I will also keep an eye on this question in case there is any other help I can provide, my friend.
Best Regards,
Behzad
2 comments
BN
on 3 Jun 2020
Edited: BN on 3 Jun 2020
I think I should mention Student's t-test (ttest2) too. I considered it at first, but when I checked my data I found that they were not normal, and since the t-test only works for normal data I used the Kruskal-Wallis test instead. So I recommend first checking whether your A and B are normal or not. If they are normal you can use the t-test; in that case you can use Guy Shechter's ttest_bonf from the File Exchange, in this link.
Download ttest_bonf, then add it to your MATLAB path using Set Path in the Home menu and choose the folder where the downloaded ttest_bonf.m is located.
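Alternatively, you can do the same thing from the command line (the folder below is just a placeholder for wherever you saved the file):
addpath('C:\Downloads\ttest_bonf');  % hypothetical folder containing ttest_bonf.m
% savepath                           % optional: keep it on the path for future sessions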
I haven't used ttest_bonf myself yet, but its syntax is similar to kruskalwallis. Here is an example:
clear; clc;
A = randn(1000,1); % random NORMAL data set
B = randn(1000,1); % random NORMAL data set
AB = [A B]; % combine them in AB
% check whether A and B are normal using the Kolmogorov-Smirnov test
% [hA, pA] = kstest(A); [hB, pB] = kstest(B);
% ^^^^ I skip kstest here because I am sure my data are normal (I used randn
% to generate them, and randn always generates normally distributed data)
% we use ttest_bonf now
[h, p, sigPairs] = ttest_bonf(AB)
This gives you the p-value, which you can compare against whatever significance level you want, as I told you earlier. It also gives you h; according to the description of ttest_bonf on the File Exchange:
% The null hypothesis is: "means of two treatments are equal".
% For TAIL = 0 the alternative hypothesis is: "means are not equal."
% For TAIL = 1, alternative: "mean of X is greater than mean of Y."
% For TAIL = -1, alternative: "mean of X is less than mean of Y."
% TAIL = 0 by default.
But wait: if you are interested in seeing the result of the plain two-sample t-test, you can do this:
[h,p,ci,stats] = ttest2(A,B) % gives you h, the p-value, and the 95% confidence interval of the two-sample t-test between A and B
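One more note on your original question about the 95% confidence interval: the ci output of ttest2 is the 95% confidence interval for the difference in means. With only the single A-versus-B comparison the Bonferroni correction changes nothing, but if you were making m pairwise comparisons you could get Bonferroni-adjusted intervals by shrinking Alpha (m below is just an illustrative number of comparisons):
m = 3;                                      % hypothetical number of pairwise comparisons
[h, p, ci] = ttest2(A, B, 'Alpha', 0.05/m); % ci is now a Bonferroni-adjusted interval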
Good luck