yzhao062 / pyod

A Comprehensive and Scalable Python Library for Outlier Detection (Anomaly Detection)

Home Page: http://pyod.readthedocs.io

ROC vs. AUC

SaVoAMP opened this issue · comments

commented

Hey,

I was trying to find anomalies using IForest.
Is there a particular reason why the evaluate_print function reports a "ROC" score? It's a bit confusing to me, since the receiver operating characteristic (ROC) refers to a curve that plots the true positive rate (recall) against the false positive rate, and therefore cannot be described by a single scalar value.
I guess you are referring to the area under the ROC curve (AUC)? If that is the case, I would suggest calling it AUC (or ROC-AUC) instead, in order to avoid confusion.
What are your thoughts on that?
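To make the distinction concrete, here is a small self-contained sketch (plain Python, not PyOD's implementation): the ROC is a whole set of (FPR, TPR) points swept over score thresholds, while the AUC collapses that curve into one scalar via trapezoidal integration. The toy labels and scores below are made up for illustration.

```python
def roc_points(y_true, scores):
    """Return the ROC curve as (FPR, TPR) points, one per threshold.

    Assumes binary labels (1 = outlier) and higher score = more anomalous.
    """
    pairs = sorted(zip(scores, y_true), reverse=True)  # descending score
    n_pos = sum(y_true)
    n_neg = len(y_true) - n_pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _score, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / n_neg, tp / n_pos))
    return points


def auc(points):
    """Trapezoidal area under the curve -- a single scalar summary."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area


# Toy data: 1 = outlier, higher score = more anomalous.
y = [0, 0, 1, 0, 1]
s = [0.1, 0.4, 0.35, 0.8, 0.9]
curve = roc_points(y, s)  # many (FPR, TPR) points -> a curve
print(auc(curve))         # one scalar, ~0.667 for this toy data
```

This is the same quantity `sklearn.metrics.roc_auc_score` computes, and it is the scalar that papers (and PyOD's output) often label simply "ROC".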

Thanks for checking. It is common practice in ML research to refer to ROC-AUC simply as ROC :)