I thought I read somewhere on the wiki that the precision-recall statistics were going to be based on the approach taken in the paper "Precision-Recall-Gain Curves: PR Analysis Done Right" by Flach and Kull. Now I don't see any mention of it. Flach and Kull's analysis seems strong and solves the interpolation problem that is causing the scikit-learn bug mentioned in another thread. Could someone clear up the background to the choice of statistic? It is hard to follow when things are seemingly so fluid. I did try Kull's R and Python implementations of their statistic and unfortunately they are quite slow. Is this the reason it was dropped?
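In case it helps to be concrete, here is a minimal sketch of what the statistic involves, assuming I have read the paper correctly: precision and recall are mapped through the gain transform precG = (prec − π) / ((1 − π)·prec) (and likewise for recall), where π is the positive class prior, and the area is taken over the recall-gain axis. This is only a rough illustration using scikit-learn's `precision_recall_curve` for the operating points; it skips the careful tie handling and interpolation that Kull's official implementation does, and the function names are my own.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def precision_recall_gain(prec, rec, pi):
    # Flach & Kull's gain transform: maps the useful range [pi, 1]
    # of precision/recall onto [0, 1].
    with np.errstate(divide="ignore", invalid="ignore"):
        prec_gain = (prec - pi) / ((1.0 - pi) * prec)
        rec_gain = (rec - pi) / ((1.0 - pi) * rec)
    return prec_gain, rec_gain

def rough_auprg(y_true, y_score):
    # Rough area under the PRG curve: transform the PR operating points
    # and integrate over the recall-gain axis with the trapezoidal rule.
    pi = np.mean(y_true)
    prec, rec, _ = precision_recall_curve(y_true, y_score)
    prec_gain, rec_gain = precision_recall_gain(prec, rec, pi)
    # keep only finite points with non-negative recall gain
    keep = np.isfinite(prec_gain) & np.isfinite(rec_gain) & (rec_gain >= 0)
    order = np.argsort(rec_gain[keep])
    x, y = rec_gain[keep][order], prec_gain[keep][order]
    return np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0)
```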
The auPRG implementation had several bugs that we found before the challenge went live. They fixed a few, but it still has problems, so we dropped it.

It is incredibly slow, epim, and I'm not sure it actually solves the problem with tied scores either.
It will take them days, maybe weeks, to evaluate a single submission. How did this end up being such a computationally heavy situation...
They could just use AUROC for now and put all the other evaluations in the post-challenge phase.
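For what it's worth, AUROC is a pure ranking statistic with no interpolation step, so it is cheap to compute with scikit-learn; a toy example (the arrays here are just made-up placeholders):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# toy example: binary ground-truth labels and predicted scores
y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7])

print(roc_auc_score(y_true, y_score))  # rank-based, no interpolation needed
```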
But in my experience, they will keep on trying and fighting... they all have a great passion for science, especially statistics. It's like trying to fight against cancer: unlikely to change anything in the end... sometimes I really feel sorry for them...
In any case, they will have to fix the precision-recall calculation sooner or later, since they will need it in another important challenge.