djarenas / Inter-Rater

Inter-Rater

Inter-rater quantifies the reliability between multiple raters who evaluate a group of subjects. It calculates the group-level statistic, Fleiss' kappa, and improves on existing software by keeping information about each rater: it quantifies how well each individual rater agreed with the rest of the group. This is accomplished through permutations of rater pairs.

The software was written in Python, can be run on Linux, and the code is deposited in Zenodo and GitHub. It can be used to evaluate inter-rater reliability in systematic reviews, medical diagnosis algorithms, education applications, and other settings.
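As a rough sketch of the two quantities described above, the example below computes Fleiss' kappa from a subjects-by-raters table, plus a per-rater score obtained by averaging each rater's pairwise agreement (here, Cohen's kappa) against every other rater. This is not the repository's own code: the function names, the table layout, and the choice of Cohen's kappa as the pairwise measure are assumptions for illustration.

```python
"""Illustrative sketch only; not the package's actual API."""
from itertools import combinations
from collections import Counter


def fleiss_kappa(ratings, categories):
    """Fleiss' kappa from a subjects-x-raters table of category labels."""
    n_subjects = len(ratings)
    n_raters = len(ratings[0])
    # Count matrix: counts[i][j] = number of raters assigning category j to subject i
    counts = [[row.count(c) for c in categories] for row in ratings]
    # Per-subject agreement P_i, then their mean P_bar
    p_i = [(sum(x * x for x in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    p_bar = sum(p_i) / n_subjects
    # Category proportions p_j and chance agreement P_e
    p_j = [sum(row[j] for row in counts) / (n_subjects * n_raters)
           for j in range(len(categories))]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)


def cohen_kappa(a, b):
    """Cohen's kappa between two raters' label vectors."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    p_e = sum((ca[c] / n) * (cb[c] / n) for c in set(a) | set(b))
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)


def per_rater_agreement(ratings):
    """Average pairwise Cohen's kappa of each rater against every other rater
    (one possible way to quantify how each rater agreed with the group)."""
    n_raters = len(ratings[0])
    columns = [[row[r] for row in ratings] for r in range(n_raters)]
    scores = {r: [] for r in range(n_raters)}
    for a, b in combinations(range(n_raters), 2):
        k = cohen_kappa(columns[a], columns[b])
        scores[a].append(k)
        scores[b].append(k)
    return {r: sum(v) / len(v) for r, v in scores.items()}


if __name__ == "__main__":
    # 4 subjects rated by 3 raters into categories "yes"/"no"
    ratings = [["yes", "yes", "no"],
               ["no",  "no",  "no"],
               ["yes", "yes", "yes"],
               ["no",  "yes", "no"]]
    print(fleiss_kappa(ratings, ["yes", "no"]))   # group-level statistic
    print(per_rater_agreement(ratings))           # one score per rater
```

For the toy table above, Fleiss' kappa comes out to about 0.33, and the per-rater averages show which raters diverged most from the rest of the group.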

This software has been used in the following academic work:

Arenas, Daniel, et al. "Cocaine, cardiomyopathy, and heart failure: A systematic review of clinical studies and meta-analysis of effect sizes." APHA's 2019 Annual Meeting and Expo (Nov. 2-Nov. 6). American Public Health Association, 2019.

Arenas, Daniel J., et al. "A systematic review and meta-analysis of depression, anxiety, and sleep disorders in US adults with food insecurity." Journal of General Internal Medicine (2019): 1-9.

Arenas, Daniel Jose, et al. "Negative health outcomes associated with food insecurity status in the United States of America: A systematic review of peer-reviewed studies." (2018).

Arenas, Daniel Jose, et al. "Systematic Review of Patient-Centered Needs Assessments Performed by Free Health Clinics." Journal of Student-Run Clinics 5.1 (2019).

About

License: GNU General Public License v3.0


Languages

Language: Python 100.0%