Hi,
sklearn "average precision score" is giving us aucpr ~0.60 and your code is giving us aucpr >0.95.
I though you mentioned the script has been fixed. Do you know what the issue could be?
Thanks
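For reference, this is roughly how we call sklearn (a minimal sketch; the label and score arrays below are placeholders, not the challenge data):

```python
from sklearn.metrics import average_precision_score

# Placeholder arrays: binary ground-truth labels and predicted scores.
# In practice these come from the challenge test set predictions.
y_true = [0, 0, 1, 0, 1, 1, 0, 0, 1, 0]
y_score = [0.1, 0.4, 0.35, 0.8, 0.65, 0.9, 0.2, 0.3, 0.7, 0.05]

auprc = average_precision_score(y_true, y_score)
print(f"sklearn average precision (AUPRC): {auprc:.3f}")
```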
Raquel Norel (rnorel): Yes. The evaluation code is nearly the same as the code [here](syn11254333) (but testing on the test set rather than the training set, of course). So, is the evaluation metric fixed? Is it the one in the code?
In "3.5 - Submission Requirements and Instructions" it reads that the evaluation metric is not yet announced ("Subchallenges 2.1-2.3: TBA").
It is important to settle this issue as it affects the selection of features.
The released scoring code will give you extremely inflated results in most cases. I would use it to check that your features can be trained on, rather than as a measuring stick for how well your features perform. I think their current code is actually fixed: I get the expected AUPRC, which can go very low when the baseline number of positives is low.
But sklearn's performance estimate is inflated by roughly 0.02 to 0.08 on this challenge data; that is the issue they fixed.
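To make the baseline point concrete: an uninformative (random) ranking has an expected AUPRC roughly equal to the fraction of positives, so when positives are rare the expected score is very low. A quick sketch with synthetic data (not the challenge data):

```python
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)

# Synthetic example: 1000 samples with ~2% positives
y_true = (rng.random(1000) < 0.02).astype(int)
random_scores = rng.random(1000)  # uninformative ranking

baseline = y_true.mean()          # fraction of positives
ap_random = average_precision_score(y_true, random_scores)

print(f"positive rate (expected random AUPRC): {baseline:.3f}")
print(f"sklearn AP for random scores:          {ap_random:.3f}")
```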
Has the scoring script been fixed?