Expected Agreement

Although kappa is probably the most commonly used agreement measure, it has been criticized. One criticism is that kappa measures exact agreement and treats near misses the same way as extreme disagreements. For some types of data, however, a "near miss" may be better than a "far miss." While this distinction generally does not matter when the categories are truly nominal (as in our example of verbs versus non-verbs), the idea of a near miss makes more sense for ordinal categories. Note also that, for a given number of observations, kappa tends to be higher when more categories are used. Even with our simple percentage agreement, we saw that collapsing adjectives and nouns into a single category increases the "success rate."

Weighted kappa is one way to address the near-miss problem. Essentially, the weighting scheme distinguishes between relatively proximal and relatively distal ordinal categories: disagreements in which the raters choose non-identical but proximal categories contribute more positively to the agreement measure than disagreements involving very different classifications.

Kappa is defined as kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement between raters and p_e is the hypothetical probability of the raters agreeing by chance, estimated from the marginal frequencies of each rater's labels. For a yes/no judgment, for example, the probability that both raters say "yes" by chance is the product of their individual "yes" proportions. Kappa is very easy to calculate, since software is readily available for this purpose and can test whether the agreement exceeds chance levels. However, there are questions about the chance (expected) agreement term, that is, the proportion of the time that the raters would agree by chance. This term is only relevant when the raters are independent, so any apparent lack of independence calls its relevance into question. The formula was entered into Microsoft Excel and used to calculate the kappa coefficient. Table 1 shows an image representing each step of the calculation.
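The calculation described above can be sketched in a few lines of code. This is a minimal illustration, not the original Excel worksheet: it assumes two raters labelling the same items, and the function names are ours. The first function computes plain Cohen's kappa from the marginals; the second is a linearly weighted kappa for ordinal categories, where near misses are penalized less than distant disagreements.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e) for two raters."""
    n = len(ratings_a)
    # Observed agreement: fraction of items labelled identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of each rater's marginal proportions,
    # summed over categories.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (p_o - p_e) / (1 - p_e)

def weighted_kappa(ratings_a, ratings_b, categories):
    """Linearly weighted kappa: disagreement weight grows with the
    distance between categories on the ordinal scale `categories`."""
    n = len(ratings_a)
    idx = {c: i for i, c in enumerate(categories)}
    k = len(categories)
    # Disagreement weights: 0 on the diagonal, |i - j| off it.
    w = [[abs(i - j) for j in range(k)] for i in range(k)]
    # Observed joint proportions.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(ratings_a, ratings_b):
        obs[idx[a]][idx[b]] += 1 / n
    row = [sum(obs[i]) for i in range(k)]
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    observed = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    expected = sum(w[i][j] * row[i] * col[j] for i in range(k) for j in range(k))
    return 1 - observed / expected

# Example: two raters tag four tokens as verb or noun.
print(cohens_kappa(["verb", "verb", "noun", "noun"],
                   ["verb", "noun", "noun", "noun"]))  # → 0.5
```

In the weighted version, a disagreement between adjacent categories (weight 1) costs half as much as one spanning two steps (weight 2), which is exactly the proximal-versus-distal distinction discussed above.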

Kappa coefficients are interpreted according to the guidelines defined by Landis and Koch (1977), with the strength of agreement interpreted as: 0.01-0.20 slight; 0.21-0.40 fair; 0.41-0.60 moderate; 0.61-0.80 substantial; 0.81-1.00 almost perfect.
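The banding above is mechanical, so it is easy to encode as a small helper. The function name is illustrative; the thresholds follow the Landis and Koch scale as listed above, with values at or below zero labelled "poor".

```python
def landis_koch_label(kappa):
    """Descriptive strength-of-agreement label per Landis & Koch (1977)."""
    if kappa <= 0:
        return "poor"
    if kappa <= 0.20:
        return "slight"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "almost perfect"
```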