Hi,
I saw that the leaderboard results are completely different now, and I don't understand what is going on. Our best prediction is currently our first submission from 08/16, which did not score well at the time, so we tried other methods to improve. Those later methods scored reasonably on the previous leaderboard, but now they score very poorly. We have been using the scoring script you provided initially during our training phase. Has the script been changed? Could you please clarify? And if so, shouldn't we all get more submission chances?
Thank you.
Best
Di
Created by Di He

DiHe, thank you. It's an issue that will have to be discussed among the organizers, but it is something we can consider if it is determined to be a widespread issue. Thanks.

Still no more submissions? The first leaderboard results directly changed our approach, and this correction will lead to inconsistencies in our preprocessing of the two time-range data sets, since we gave up the initial approach of the first "bad" submission (now the best...).

The AUPR and AUROC scores on the leaderboards have been corrected. The p-values are still running. If you're referring to the Python AUPR and AUROC code we provided, that is correct. The error was in the portion of the scoring harness that matched the submissions to the gold standards via the SUBJECTID.

But the issue here is that if we had known the initial approach was the best one, we would have gone in that direction, not the later ones. Sure, overfitting is a big issue, but isn't the primary purpose of the leaderboard to help participants select their models? We wasted our last three submissions in a totally wrong direction, and now we don't even know what the performance would be if we stuck to our initial approach.
Besides, is the provided scoring script correct or not? If not, could you please update it so that we can evaluate our models against the correct standard?
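For context on the kind of bug described above: the organizers said the AUPR/AUROC code itself was correct, and the error was in how the harness matched submissions to the gold standard via SUBJECTID. The sketch below is not the official harness; it just illustrates (with assumed column names `SUBJECTID`, `LABEL`, `SCORE`) why scoring should join on the subject ID rather than rely on row order.

```python
# Minimal sketch of ID-based scoring, assuming a pandas/sklearn setup
# and hypothetical column names (SUBJECTID, LABEL, SCORE).
import pandas as pd
from sklearn.metrics import roc_auc_score, average_precision_score


def score_submission(pred_df: pd.DataFrame, gold_df: pd.DataFrame):
    """Align predictions to the gold standard by SUBJECTID, then score."""
    # Join on SUBJECTID so each prediction is compared to the gold label
    # for the same subject; a purely positional comparison would silently
    # mis-score any submission whose rows are in a different order.
    merged = gold_df.merge(
        pred_df, on="SUBJECTID", how="inner", validate="one_to_one"
    )
    auroc = roc_auc_score(merged["LABEL"], merged["SCORE"])
    aupr = average_precision_score(merged["LABEL"], merged["SCORE"])
    return auroc, aupr


# Toy example: the submission lists subjects in reverse order, but the
# join still pairs each score with the right label.
gold = pd.DataFrame({"SUBJECTID": [1, 2, 3, 4], "LABEL": [0, 0, 1, 1]})
pred = pd.DataFrame({"SUBJECTID": [4, 3, 2, 1], "SCORE": [0.9, 0.8, 0.2, 0.1]})
auroc, aupr = score_submission(pred, gold)  # perfect separation: both 1.0
```

A positional (row-by-row) comparison on the same toy data would have paired the highest scores with the negative labels and produced a drastically lower AUROC, which is consistent with the sudden leaderboard drops described in this thread.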
Thank you. The reason we limit submissions is to reduce overlearning and overfitting on the leaderboard, given the small test set. You're free to use your first submission as your "official" one if it gives you the best results.

Just noticed the announcement: no further submissions are allowed? But our issue is that the corrected leaderboard shows our first submission is our best, yet initially that submission did not score well. So we completely changed our approach to narrow down the gene pool, and because our later submissions scored well, we used that approach for our last three submissions. Now everything has changed, which effectively means we gave up our best approach and wasted all three remaining submissions on what turned out to be the wrong direction. I do not think that is fair to anyone making a real effort in this challenge. Since you have extended the deadline by almost 20 days, we have enough time to make the right modifications and update accordingly. There is no reason why more leaderboard submissions should not be allowed.

Please see this thread: https://www.synapse.org/#!Synapse:syn5647810/discussion/threadId=800
We will be making an announcement shortly.

I am having the same problem. I found that everyone else's performance dropped by about 50%, even for submissions of a similar size to mine, but my performance dropped by about 75%.
==========================
Update: Sorry, I posted on the wrong forum when I received the automatic email alert. I thought this was for the Disease Module challenge, a common problem when participating in 5 challenges at the same time.