
multcompare and ttest2

karlo gonzales
karlo gonzales on 2 Mar 2016
Commented: Felix-Antoine Savoie on 21 Jan 2019
Dear Friends,
I have a simple, very basic question about the p-values from the multcompare and ttest2 functions. As I understand it, after ANOVA we use post-hoc analysis to get p-values for all pairs of groups. I expected to get the same p-values using ttest2, but they are very different! Would you please help me figure out the problem? Thanks, Karlo

Answers (1)

the cyclist
the cyclist on 2 Mar 2016
Edited: the cyclist on 2 Mar 2016
When you make pairwise comparisons among several groups (not just two), you are more likely to find a difference between a given pair, just by random chance.
The p-values you get from multcompare take this into account. The p-values you get from running all the t-tests independently do not (because each test doesn't "know" that you have run multiple comparisons).
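As a rough illustration, here is a minimal sketch on made-up data (the group sizes and means below are hypothetical, just to show the pattern):
rng(0);                                    % reproducible hypothetical data
x = [randn(20,1); randn(20,1)+0.5; randn(20,1)+1];
g = [ones(20,1); 2*ones(20,1); 3*ones(20,1)];
[~,~,stats] = anova1(x, g, 'off');         % one-way ANOVA, no figure
c = multcompare(stats, 'Display', 'off');  % corrected comparisons (Tukey-Kramer by default)
disp(c(:,6))                               % corrected p-value for each pair
[~, p12] = ttest2(x(g==1), x(g==2));       % uncorrected two-sample t-test, groups 1 vs 2
disp(p12)                                  % typically smaller than the corrected p-value
The corrected p-values from multcompare will generally be larger than the corresponding uncorrected ttest2 p-values, which is the point of the correction.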
  5 comments
the cyclist
the cyclist on 3 Mar 2016
There are many possible solutions to the multiple comparisons problem. I don't know the algorithm that multcompare uses, and can't dig into it right now. There are references in the documentation. You could also type
edit multcompare
to see what the code does.
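The correction multcompare applies can also be selected explicitly through its 'CType' option (the default is 'tukey-kramer'); a minimal sketch, assuming a stats structure from anova1 like the one in the example above:
c = multcompare(stats, 'CType', 'bonferroni', 'Display', 'off');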
Felix-Antoine Savoie
Felix-Antoine Savoie on 21 Jan 2019
Dear the cyclist,
You are correct that uncorrected multiple comparisons may lead to false positives (by chance). However, when using "multcompare" with the 'lsd' correction (i.e., uncorrected t-tests), I still get p-values that differ from those obtained from a standard t-test (not very different, but still). Do you have any idea why this happens?
Felix
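One plausible explanation for the remaining gap: multcompare's 'lsd' comparison uses the pooled error variance (MSE) and the error degrees of freedom from the ANOVA across all groups, whereas ttest2 estimates the variance from only the two groups being compared, so the test statistic and degrees of freedom differ slightly. A sketch, assuming x, g, and stats from the anova1 example above:
n1 = sum(g==1);  n2 = sum(g==2);
mse = stats.s^2;                           % pooled error variance from the ANOVA
dfe = stats.df;                            % error degrees of freedom from the ANOVA
t    = (mean(x(g==1)) - mean(x(g==2))) / sqrt(mse*(1/n1 + 1/n2));
pLSD = 2*tcdf(-abs(t), dfe);               % should match multcompare(stats,'CType','lsd') for this pair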
