Hi Thomas, I see everything has changed on the leaderboards: all scores and rankings for Subchallenges 1 & 2. Thanks, SJ

Created by S J SAJA
Samad- I'm very sorry for the inconvenience and that you're disappointed with your corrected score. As you can see in the Files, we have provided the scoring functions used, and you're welcome to inspect them yourself. This particular bug occurred in matching the SUBJECTIDs, however, not in those scoring functions, and the current scores have been verified by my independent implementation in R. You will also be able to verify your scores yourself when we release the true values for the test data, which will happen when we release the independent test data. Solly
Solly, If "not every submission was affected," then the leaderboard guided a few groups and misled others. Sorry, but I have to say it was not a DREAM Challenge. However, I still think the scoring is not correct for the groups that had been at the top and whose scores are now in the random-prediction range. I think the SAMPLEIDs are somehow not being correctly assigned to the predicted values. There was a similar problem, a bug of this kind in the scoring script of the DREAM 10 ALS Stratification Prize4Life Challenge; you can review our discussion there. I think providing standard scoring scripts from previous challenges could be a good idea for future challenges. Also, having a baseline model submitted to the leaderboard could catch this type of issue. Thank you very much for being responsible and responding to questions! Samad
If the submission IDs you gave me refer to the files you submitted via the submission queues, then those are your scores. The bug had to do with aligning the SAMPLEIDs in the prediction files to the SAMPLEIDs in the Gold Standard files, so not every submission was affected. I'm very sorry for the confusion caused, but we are confident in the current scores. We will be giving participants a chance to update their submission materials based on the current leaderboard information. Solly
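(A minimal sketch, for illustration only, of the kind of SAMPLEID alignment Solly describes: joining each prediction file to the Gold Standard on the ID column before scoring, rather than assuming both files share the same row order. This is not the challenge's scoring code, and the file layout and the SAMPLEID/SCORE/LABEL column names are assumptions.)

```python
import pandas as pd
from sklearn.metrics import roc_auc_score, average_precision_score

def score_submission(prediction_csv, goldstandard_csv):
    """Score a prediction file against the gold standard after aligning on SAMPLEID.

    Column names (SAMPLEID, SCORE, LABEL) are hypothetical; the point is only
    that joining on the ID column avoids the row-order misalignment discussed above.
    """
    pred = pd.read_csv(prediction_csv)
    gold = pd.read_csv(goldstandard_csv)

    # Pair every predicted value with the gold-standard label for the same
    # sample, regardless of the order the rows appear in either file.
    merged = gold.merge(pred, on="SAMPLEID", how="inner", validate="one_to_one")
    if len(merged) != len(gold):
        raise ValueError("Prediction file does not cover all SAMPLEIDs")

    return {
        "AUROC": roc_auc_score(merged["LABEL"], merged["SCORE"]),
        "AUPR": average_precision_score(merged["LABEL"], merged["SCORE"]),
    }
```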
Thank you, Solly, for hand-scoring. I see the scores are about 20% worse than on the previous leaderboard, so maybe the submissions are not correctly assigned. Could you double-check against the submitted files, which are available from our project page? The file the script is using to calculate the outputs could be a different one. I am guessing there is a problem with our scores because: 1) we are getting scores about 20% worse than on the previous leaderboard, while at the same time some submissions from other groups did not change at all (if the bug was in the scoring script, then all scores should change at least a little); 2) we are fairly sure we learned something from our test submissions, yet our final submissions now score like random predictions or worse, while our test submissions score better. Thanks, S
These are the scores I get when I hand-score them:

| SubmissionID | AUPR | AUROC |
|-|-|-|
| 7208294 | 0.4752490 | 0.4271886 |
| 7207921 | 0.5107437 | 0.5454545 |

(Note that my code produces minor differences in the PR curve relative to the "official" scoring code.)
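(A side note on the "minor differences in the PR curve": the small toy example below, which is not either of the scoring implementations used in the challenge, shows how two reasonable AUPR estimators can disagree slightly on the same data, since step-wise average precision and trapezoidal integration of the precision-recall curve are different approximations of the same area.)

```python
import numpy as np
from sklearn.metrics import average_precision_score, precision_recall_curve, auc

# Toy labels and scores only; not challenge data.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
scores = rng.random(200)

# Step-wise estimate of the area under the PR curve.
ap = average_precision_score(labels, scores)

# Trapezoidal integration of the same PR curve.
precision, recall, _ = precision_recall_curve(labels, scores)
aupr_trapezoid = auc(recall, precision)

print(f"average precision : {ap:.6f}")
print(f"trapezoidal AUPR  : {aupr_trapezoid:.6f}")  # typically differs slightly
```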
Solly, These are our final submissions:
- Up to hour 0: ID 7208294
- Up to hour 24: ID 7207921

You can also find details in our write-up. Thanks, S
Please send me the submission ID and the subchallenge you're referring to, and I will hand-check the score. Solly
Thomas, I believe our new scores are not correct. Our best prediction for Sub-challenge 2 was in second place and is now like a random prediction, even worse than our test submissions. I don't know what kind of bug it was, but I see that the scores of some submissions did not change at all, while ours now look like random predictions. Can you please have a close look at our outputs? If the new scores are correct, then we were misled by the leaderboard results of our test submissions. Our team is ES.SJ_PREDICTOMIX. Thanks, Samad
Dear Samad, That is correct. The scores are now correct, but the p-values are still calculating. Apologies for the inconvenience. Best, Thomas
I believe the scores are now correct, but the p-values may still be running. Is that correct, @thomas.yu?
Solly, Are the scores showing on the leaderboard the ones after fixing the bug? Some groups still have the same scores, but our scores changed a lot and now look like random predictions. I hope the bug is not a big one and does not leave other solvers as confused as we are at the moment. Thanks again for organizing a very interesting challenge. - Samad
Samad- A bug has been found in the leaderboard scoring for Subchallenges 1 and 2. All the scoring and permutation p-values are currently being re-run. We will be giving participants an opportunity to update their submission materials to select different "official" submissions in light of this information. An announcement will be made shortly. Solly
Dear Samad, Apologies for the inconvenience. There will be an announcement made about this. Thank you for your patience. Best, Thomas

Is something going wrong with the leaderboard? The page is stuck loading…