DannyF46 / Inter-rater-Agreement-Statistics

A calculator for two different inter-rater agreement statistics, generalized to any number of categories.

Inter-rater Agreement Statistics Calculator

Overview:

Inter-rater agreement statistics measure how well two raters (or two classification methods), each assigning the same items to two or more categories, agree with each other.

For example, say you want to determine whether a novel medical test is reliable. The new test either diagnoses a patient with some condition (category A) or it doesn't (category B). To check its reliability, you can compare it with a standard test that is already known to be valid (i.e. a second set of A and B categories). A more reliable test will agree more often with the standard test (more AA and BB entries), so the agreement coefficients will be closer to 1. If there is no correlation between the two tests (roughly as many AA and BB entries as AB and BA), the coefficients will be close to 0. If there is a negative correlation (more AB and BA entries), the coefficients will be closer to -1.
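The README does not name which two statistics the calculator implements, so purely as an illustration of how such a coefficient behaves, here is a minimal sketch using Cohen's kappa (one widely used agreement statistic) on the two-category medical-test example above:

```python
# Minimal sketch: Cohen's kappa for the two-category medical-test example.
# Cohen's kappa is shown only as a representative agreement coefficient;
# it is not necessarily one of the two statistics this repository computes.

def cohens_kappa_2x2(aa, ab, ba, bb):
    """Compute Cohen's kappa from a 2x2 table of counts.

    aa = both tests say A, ab = new test A / standard B,
    ba = new test B / standard A, bb = both tests say B.
    """
    n = aa + ab + ba + bb
    observed = (aa + bb) / n                   # observed agreement
    # Chance agreement from each test's marginal proportions.
    p_a1, p_a2 = (aa + ab) / n, (aa + ba) / n  # P(A) for each test
    p_b1, p_b2 = (ba + bb) / n, (ab + bb) / n  # P(B) for each test
    expected = p_a1 * p_a2 + p_b1 * p_b2
    return (observed - expected) / (1 - expected)

print(cohens_kappa_2x2(45, 5, 5, 45))    # strong agreement   -> ~0.8 (near 1)
print(cohens_kappa_2x2(25, 25, 25, 25))  # chance-level only  -> 0.0
print(cohens_kappa_2x2(5, 45, 45, 5))    # negative relation  -> ~-0.8 (near -1)
```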

Use:

Run the script, select a number of categories, and enter your data into the array of cells that appears. Press Enter or click GO to receive two inter-rater agreement statistics, rounded to the number of decimal places of your choice.
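Conceptually, such statistics can be computed from the k × k table of counts you enter, where cell (i, j) holds the number of items one rater placed in category i and the other in category j. The sketch below assumes Cohen's kappa and Scott's pi as two representative statistics (the script's actual pair is not named here) to show how the calculation generalizes to any number of categories:

```python
# Hedged sketch of how agreement statistics generalize to k categories.
# Cohen's kappa and Scott's pi are used as common, representative choices;
# they may differ from the two statistics the script actually reports.

def agreement_stats(table, decimals=3):
    """table[i][j] = count of items rater 1 put in category i
    and rater 2 put in category j (a k x k array of counts)."""
    k = len(table)
    n = sum(sum(row) for row in table)
    observed = sum(table[i][i] for i in range(k)) / n  # diagonal = agreement

    # Marginal proportions for each rater.
    p_rows = [sum(table[i]) / n for i in range(k)]                        # rater 1
    p_cols = [sum(table[i][j] for i in range(k)) / n for j in range(k)]   # rater 2

    # Cohen's kappa: chance agreement from the product of the two marginals.
    exp_kappa = sum(p_rows[i] * p_cols[i] for i in range(k))
    kappa = (observed - exp_kappa) / (1 - exp_kappa)

    # Scott's pi: chance agreement from the pooled (averaged) marginals.
    exp_pi = sum(((p_rows[i] + p_cols[i]) / 2) ** 2 for i in range(k))
    pi = (observed - exp_pi) / (1 - exp_pi)

    return round(kappa, decimals), round(pi, decimals)

# Example: 3 categories, moderate agreement between the two raters.
print(agreement_stats([[20, 5, 0],
                       [3, 15, 4],
                       [1, 2, 10]], decimals=2))
```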

License: MIT License

