How to test for a significant difference between Cohen's Kappa values?

I have calculated Cohen's Kappa to quantify agreement between Test A and Test B, as well as Cohen's Kappa for agreement between Test A and Test C. What method would I use to test for a significant difference in Kappa values between the A-B agreement and the A-C agreement? Are there any existing scripts/functions available for this?
  2 comments
Jeff Miller on 8 Sep 2021
Is there a single sample for which you have classifications on all 3 tests, or do you have tests A & B on one sample and tests A & C on a different sample? I think these two cases would have to be treated differently...
Leonard Hickman on 13 Sep 2021
Two separate samples, one sample that underwent tests A & B and one sample that underwent tests A & C.


Answers (3)

Jeff Miller on 14 Sep 2021
As I understand it, the fundamental question is whether tests A & B agree better than tests A & C (or worse, depending on how tests B and C are labelled), beyond a difference that could just be due to chance. The null hypothesis is that the agreement between A & B is equal to the agreement between A & C.
The most straightforward test for this case is the chi-square test for independence. Imagine the data summarized in a 2x2 table like this:
                Tests agree   Tests disagree
A & B group:        57              17
A & C group:        35               8
with total N's of 74 in the first group and 43 in the second group. MATLAB's 'crosstab' command will compute that chi-square test for you. See this answer for an explanation of how to format the data and run the test.
Cohen's Kappa is a useful numerical measure of the extent of agreement, but it isn't really optimal for deciding whether the levels of agreement are different for the two pairs of tests.
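For reference, here is a minimal sketch of how Cohen's kappa and a commonly used large-sample approximate standard error can be computed from a confusion matrix; the counts are hypothetical, and the standard errors are what you would feed into the z-test approach discussed in the other answers:
% minimal sketch (hypothetical counts): Cohen's kappa and an
% approximate large-sample standard error from a k-by-k confusion matrix
C  = [50 7; 10 28];                       % rows: Test A categories, cols: Test B categories
n  = sum(C(:));                           % total number of cases
po = trace(C)/n;                          % observed proportion of agreement
pe = (sum(C,2)'*sum(C,1)')/n^2;           % agreement expected by chance
kappa = (po - pe)/(1 - pe);               % Cohen's kappa
se    = sqrt(po*(1 - po)/(n*(1 - pe)^2)); % approximate standard error
fprintf('kappa = %.3f (SE = %.3f)\n', kappa, se)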
  1 comment
Peter H Charlton on 22 Aug 2022
Edited: Peter H Charlton on 22 Aug 2022
In case it's helpful, here is some example code for formatting data and running the test (adapted from the code here):
% input data (from above):
tbl = [57,17; 35,8];
% format as two input vectors:
x1 = [repmat(1,[tbl(1,1),1]); repmat(2,[tbl(2,1),1]); repmat(1,[tbl(1,2),1]); repmat(2,[tbl(2,2),1])];
x2 = [repmat(1,[tbl(1,1),1]); repmat(1,[tbl(2,1),1]); repmat(2,[tbl(1,2),1]); repmat(2,[tbl(2,2),1])];
% run the test:
[tbl_new,chi2stat,pval] = crosstab(x1,x2);
% check:
if isequal(tbl,tbl_new)
    fprintf('The cross-tabulation table was correctly generated\n')
end
The cross-tabulation table was correctly generated
And I think the following code is generalisable to an m-by-n table (using data from here as an example):
% input data (from above link):
tbl = [90,60,104,95; 30,50,51,20; 30,40,45,35];
% format as two input vectors:
[x1,x2] = deal([]);
for row_no = 1 : height(tbl)
    for col_no = 1 : width(tbl)
        x1 = [x1; repmat(row_no, [tbl(row_no,col_no),1])];
        x2 = [x2; repmat(col_no, [tbl(row_no,col_no),1])];
    end
end
% run the test:
[tbl_new,chi2stat,pval] = crosstab(x1,x2);
% check:
if isequal(tbl,tbl_new)
    fprintf('The cross-tabulation table was correctly generated\n')
end
The cross-tabulation table was correctly generated
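As an aside, the nested loops above could likely be replaced with a vectorised sketch using repelem; the (x1, x2) pairs are the same, although they come out in column-major rather than row-major order, which does not affect crosstab:
% vectorised sketch: expand each category pair by its cell count
[m, n] = size(tbl);
row_labels = repmat((1:m)', n, 1);  % row index of each cell, column-major
col_labels = repelem((1:n)', m);    % column index of each cell, column-major
x1 = repelem(row_labels, tbl(:));   % repeat each row label by its cell count
x2 = repelem(col_labels, tbl(:));   % repeat each column label by its cell count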



Star Strider on 6 Sep 2021
Edited: Star Strider on 13 Sep 2021
I used Cohen's κ many years ago. From my understanding, from reading Fleiss's book (and corresponding with him), Cohen's κ is approximately normally distributed in large samples. An excellent (in my opinion) and free resource is: Interrater reliability: the kappa statistic. There are others, although not all are free.
EDIT — (13 Sep 2021 at 10:58)
To get p-values and related statistics for normally-distributed variables, the ztest function would likely be appropriate.
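For instance, here is a minimal sketch of a two-sided z-test comparing two independent kappas, computing the z statistic by hand and converting it with normcdf rather than calling ztest directly; all kappa and standard-error values below are hypothetical placeholders:
% hypothetical kappas and standard errors for the two comparisons
k1 = 0.62;  se1 = 0.08;                % tests A & B
k2 = 0.45;  se2 = 0.10;                % tests A & C
z    = (k1 - k2)/sqrt(se1^2 + se2^2);  % independent samples: variances add
pval = 2*normcdf(abs(z), 'upper');     % two-sided p-value
fprintf('z = %.2f, p = %.4f\n', z, pval)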

Ive J on 12 Sep 2021
You can build confidence intervals around your Kappa values and then see whether they overlap. Non-overlapping intervals indicate a significant difference, although overlapping intervals do not necessarily imply a non-significant one.
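A minimal sketch of a normal-approximation 95% CI, assuming a kappa and its standard error have already been estimated (values hypothetical):
% hypothetical kappa and standard error
k  = 0.62;  se = 0.08;
ci = k + [-1 1]*norminv(0.975)*se;   % normal-approximation 95% CI
fprintf('95%% CI: [%.3f, %.3f]\n', ci)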
  2 comments
Leonard Hickman on 13 Sep 2021
While this would work for demonstrating significance, and I have found info on calculating a 95% CI for kappa, the journal I am submitting to would like a p-value.
Ive J on 13 Sep 2021
You may want to take a look at this thread. Then you can calculate the z-score and get a p-value out of this.
pval = 2*normcdf(abs(zvalue), 'upper'); % two-sided test (abs handles negative z)

