java.lang.Object
org.elasticsearch.xpack.core.ml.dataframe.evaluation.common.AbstractAucRoc
All Implemented Interfaces:
NamedWriteable, Writeable, org.elasticsearch.xcontent.ToXContent, org.elasticsearch.xcontent.ToXContentObject, EvaluationMetric
Direct Known Subclasses:
AucRoc (classification), AucRoc (outlier detection)

public abstract class AbstractAucRoc extends Object implements EvaluationMetric
Area under the curve (AUC) of the receiver operating characteristic (ROC). The ROC curve is a plot of the TPR (true positive rate) against the FPR (false positive rate) over a varying threshold. This implementation makes use of ES aggregations to calculate the curve and then applies the trapezoidal rule to compute the AUC. In particular, in order to calculate the ROC, we take percentiles of the predicted probability for the true positives and for the false positives; we call those Rate-Threshold curves. We then scan the ROC points of each Rate-Threshold curve against the other using interpolation. This gives an approximation of the ROC curve that has the advantage of being efficient and resilient to some edge cases. When this metric is used for multi-class classification, it calculates the ROC curve of each class versus the rest.
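The sketch below is not this class's code; it is a minimal illustration of how a Rate-Threshold curve can be read, and the class name (RateThresholdSketch), the method names (rocPointAtPositivePercentile, interpolatedRank), and the 99-bucket percentile layout are assumptions made for the example. If tpPercentiles holds the 1st through 99th percentiles of the predicted probability over documents that actually belong to the class, and fpPercentiles holds the same over documents that do not, then thresholding at the p-th positive percentile keeps roughly the top (100 - p)% of positives, so TPR is about 1 - p/100, and the FPR at that same threshold is read off the negatives' curve by interpolation.

    // Illustrative sketch only; not the actual AbstractAucRoc implementation.
    final class RateThresholdSketch {

        // ROC point obtained by thresholding at the p-th percentile (p = 1..99)
        // of the predicted probability over the actual-positive documents.
        static double[] rocPointAtPositivePercentile(int p, double[] tpPercentiles, double[] fpPercentiles) {
            double threshold = tpPercentiles[p - 1];
            double tpr = 1.0 - p / 100.0;                                       // share of positives scored above the threshold
            double fpr = 1.0 - interpolatedRank(threshold, fpPercentiles) / 100.0; // share of negatives scored above it
            return new double[] { fpr, tpr };
        }

        // Percentile rank (0..100) of the threshold within a percentile curve,
        // using linear interpolation between adjacent percentiles.
        static double interpolatedRank(double threshold, double[] percentiles) {
            if (threshold < percentiles[0]) {
                return 0.0;
            }
            for (int i = 1; i < percentiles.length; i++) {
                if (threshold <= percentiles[i]) {
                    double step = percentiles[i] - percentiles[i - 1];
                    double fraction = step == 0.0 ? 1.0 : (threshold - percentiles[i - 1]) / step;
                    return i + fraction;  // percentiles[i] is the (i + 1)-th percentile
                }
            }
            return 100.0;
        }
    }
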
  • Field Details

    • NAME

      public static final org.elasticsearch.xcontent.ParseField NAME
  • Constructor Details

    • AbstractAucRoc

      protected AbstractAucRoc()
  • Method Details

    • getName

      public String getName()
      Description copied from interface: EvaluationMetric
      Returns the name of the metric (which may differ from the writeable name)
      Specified by:
      getName in interface EvaluationMetric
    • percentilesArray

      protected static double[] percentilesArray(Percentiles percentiles)
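      A minimal sketch of what this conversion could look like, assuming the Percentiles aggregation was requested for the 1st through 99th percentiles; the stand-in names (PercentilesSketch, percentilesToArray) and the use of Percentile.getPercent()/getValue() from org.elasticsearch.search.aggregations.metrics are assumptions for illustration, not this class's actual code.

        import org.elasticsearch.search.aggregations.metrics.Percentile;
        import org.elasticsearch.search.aggregations.metrics.Percentiles;

        final class PercentilesSketch {
            // Sketch only: pack the 1st..99th percentile values into a dense array,
            // so that result[p - 1] holds the p-th percentile.
            static double[] percentilesToArray(Percentiles percentiles) {
                double[] result = new double[99];
                for (Percentile percentile : percentiles) {
                    int rank = (int) percentile.getPercent();   // ranks 1..99
                    result[rank - 1] = percentile.getValue();
                }
                return result;
            }
        }
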
    • buildAucRocCurve

      protected static List<AbstractAucRoc.AucRocPoint> buildAucRocCurve(double[] tpPercentiles, double[] fpPercentiles)
      Visible for testing
    • calculateAucScore

      protected static double calculateAucScore(List<AbstractAucRoc.AucRocPoint> rocCurve)
      Visible for testing
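      A minimal sketch of the trapezoidal rule this method is described as using. AbstractAucRoc.AucRocPoint's accessors are not shown on this page, so the sketch uses a stand-in Point record; the class, record, and method names here are assumptions for illustration, not this class's actual code.

        import java.util.Comparator;
        import java.util.List;

        final class TrapezoidSketch {
            // Stand-in for AbstractAucRoc.AucRocPoint: one ROC point, assumed to
            // carry the true positive rate and false positive rate at a threshold.
            record Point(double tpr, double fpr) {}

            // Trapezoidal rule: sum the area of the trapezoid spanned by each pair
            // of consecutive points, ordered by increasing FPR (the x axis).
            static double trapezoidalAuc(List<Point> rocCurve) {
                List<Point> sorted = rocCurve.stream()
                    .sorted(Comparator.comparingDouble(Point::fpr))
                    .toList();
                double auc = 0.0;
                for (int i = 1; i < sorted.size(); i++) {
                    double width = sorted.get(i).fpr() - sorted.get(i - 1).fpr();
                    double meanHeight = (sorted.get(i).tpr() + sorted.get(i - 1).tpr()) / 2.0;
                    auc += width * meanHeight;
                }
                return auc;
            }
        }

      If the curve does not already include the end points (FPR 0, TPR 0) and (FPR 1, TPR 1), adding them before summing makes the area span the full FPR range.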