Is the code "average_AUC" provided from "evaluation_metrics_python2.py" correct?

I downloaded "evaluation_metrics_python2.py" from the "IDG-DREAM Drug-Kinase Binding Prediction Challenge" (see the picture below) to measure model performance; it provides the definitions of "rmse", "pearson", "spearman", "ci", "f1", and "average_AUC".
${imageLink?synapseId=syn18080745&align=None&scale=100&responsive=true&altText=}
However, I have a question about how the average_AUC is computed. Please refer to the picture below:
${imageLink?synapseId=syn18080746&align=None&scale=100&responsive=true&altText=}
For the inputs y (true values) and f (predicted values), y is converted into binary form before computing roc_curve, but f is kept in its original form, i.e., continuous real values. I would have expected f to be transformed in some way before the roc_curve calculation, so that it resembles a probability estimate of the positive class.
Could you please help me understand the average_AUC code?
Created by Huan Xiao (huan.xiao)

Hi,
I think that the current way AUC is calculated is correct, but there could be other approaches to calculating it as well.
In our approach ("evaluation_metrics_python2.py"), y_binary corresponds to the true binary compound-kinase interaction labels (given a certain interaction threshold), whereas f stores the predicted non-thresholded measures of interaction. No transformation of f is needed: ROC analysis depends only on the ranking induced by the predicted scores, so any monotonic transformation of f (for example, mapping it into probabilities) would yield exactly the same AUC.
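For reference, here is a minimal sketch of how an AUC averaged over several binarization thresholds can be computed with scikit-learn. The function name average_auc and the threshold values below are hypothetical placeholders for illustration; the actual thresholds used by the challenge are defined in "evaluation_metrics_python2.py".

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def average_auc(y, f, thresholds=(6.0, 6.5, 7.0, 7.5, 8.0)):
    """Average ROC AUC over several binarization thresholds.

    y : true continuous affinities (e.g., pKd values).
    f : predicted continuous scores; no probability transform is
        needed because ROC AUC depends only on the ranking of f.
    thresholds : cutoffs used to binarize y (hypothetical values;
        check evaluation_metrics_python2.py for the actual ones).
    """
    y = np.asarray(y, dtype=float)
    f = np.asarray(f, dtype=float)
    aucs = []
    for t in thresholds:
        # binarize the true values: 1 = interacting at this cutoff
        y_binary = (y >= t).astype(int)
        # skip cutoffs where only one class is present (AUC undefined)
        if y_binary.min() == y_binary.max():
            continue
        fpr, tpr, _ = roc_curve(y_binary, f)
        aucs.append(auc(fpr, tpr))
    return float(np.mean(aucs))
```

As a sanity check on the ranking argument above, replacing f with a sigmoid-squashed version, 1 / (1 + np.exp(-f)), leaves the returned value unchanged, since the sigmoid is strictly monotonic.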
I see! Thank you!