I'm wondering if there are any updates on whether Round 1 will be extended, since there have been significant delays in training and no one appears on the leaderboard with only 10 days remaining in this round.
Thanks
Hi All,
Round 1 is still scheduled to end on December 23 (inference submissions sent on time will continue to be evaluated until we can no longer delay the update of the servers). The duration of Rounds 2 and 3 may, however, be extended based on your feedback (a short questionnaire will be sent).
Thanks for your contribution to this Challenge!

I support the call for the extension. I think that both Yaroslav's and Yuanfang's suggestions are fair.
I think that giving three more submissions (or allowing teams to replace / improve their submissions) for teams who have already submitted the file is fair, and also giving priority to teams who weren't able to do so due to bottlenecks.
However, I agree with Yuanfang's comment that the chances of submissions are extremely slim. I am also in favor of extending the first round after the break for system maintenance. I would also suggest several things:
- fix bugs like [this one](https://www.synapse.org/#!Synapse:syn4224222/discussion/threadId=1266).
- add a priority queue for submissions. Right now several teams have already used all their training time, but instead of giving others the chance to train their models you are just giving more time to all of us. It resembles a disorganized throng rather than fair resource sharing. I am really tired of waiting for log updates, even in the Express Lane (!). For example, I change something in my script and want to test it on the Express Lane before submitting it to the general lane, and for a full 20 minutes my model just sits there without any update!
- As a more general point, can you authorize us to download the preprocessed data? What is the point of keeping everything in the cloud? I believe that we all want to advance the state of the art and make screening more accurate, but, personally, I spend 70-80% of my time fighting the cloud environment rather than working on the model. Fixing even a small bug in the code becomes a whole adventure: rebuilding and pushing a Docker image, copy-pasting its SHA digest into the submission file, uploading the file, and finally clicking several times just to put it into the queue (a rough sketch of this cycle is below)! Moreover, when a bug is more serious it is impossible to fix without writing on the forum, as, for example, with the [Tensorflow version](https://www.synapse.org/#!Synapse:syn4224222/discussion/threadId=1384). Am I the only one who finds it very difficult to work?
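To make this concrete, here is a minimal sketch of the rebuild-push-resubmit cycle I go through for every fix. The image name, registry path, and submission file name are placeholders, not the challenge's actual ones, and the final upload and queuing still have to be done by hand in the Synapse web interface:

```python
import subprocess

# Placeholder image name on the challenge registry; substitute your own.
IMAGE = "docker.synapse.org/syn0000000/my-training-model:latest"

def run(cmd):
    """Run a shell command and return its stdout, raising on failure."""
    result = subprocess.run(cmd, check=True, capture_output=True, text=True)
    return result.stdout.strip()

# 1. Rebuild the image after every code change.
run(["docker", "build", "-t", IMAGE, "."])

# 2. Push it to the registry so the challenge infrastructure can pull it.
run(["docker", "push", IMAGE])

# 3. Recover the sha256 digest that has to go into the submission file
#    (RepoDigests is populated once the image has been pushed).
digest = run(["docker", "inspect", "--format", "{{index .RepoDigests 0}}", IMAGE])

# 4. Write the digest into a (hypothetical) submission file; uploading it and
#    queuing the submission is still a manual step on the website.
with open("submission.txt", "w") as f:
    f.write(digest + "\n")

print("New digest:", digest)
```

Even with this scripted, every one-line bug fix still costs a full image rebuild, a push over the network, and a manual round trip through the web UI.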
I am absolutely fine with an extension too. But speaking as someone who has already submitted and will soon use up the quota: to meet the deadline I submitted sample code without even knowing that each patient can have multiple images. So the organizers must:
1. **give those who already submitted 3 additional submissions, starting on the 23rd;**
2. **give such teams more compute hours after that**, because we have planned to use up the 332 hours exactly on the 23rd.
Then you can extend as you like, but then **no participant can complain about teams submitting 6 or 8 versions and ensembling the best, as this is what you asked for.**
Finally, let me remind the organizers that in a recent challenge I strongly opposed an extension, but the organizers extended anyway, and eventually the complaint about the extension came from **the EXACT TEAM WHO ASKED FOR THE EXTENSION**. If you extend, **when do you stop? At any time point there will be newly joined teams that are not finished... then how do you justify letting A finish but not B?** One needs to learn from one's mistakes. If you don't extend, no one can blame you with good reason; if you extend, both those who asked for the extension and those who asked against it will blame you, with the very good reason of disorganization.
I second the call for an extension. Is Round 1 still scheduled to end on December 23rd? I support an extension because the Python code for Tensorflow is buggy. Such inconveniences caused us to lose time trying to figure out what the error was, and it is harder to find the real cause in a Docker-based system.

I am fine with an extension. Just some pros and cons to consider (actually only cons):
But then, correspondingly, to be fair, all participants who have already submitted must be given 3 submission opportunities starting from the extension date, because several teams didn't know there could possibly be an extension, submitted very preliminary versions, and were then told they are **done, no replacement**, as can be seen in previous discussions on this forum. **These teams would otherwise be penalized for being good citizens.** At least I just submitted off-the-shelf code downloaded from other example files; the first 3 rounds should be for people who do this for a living anyway.
But then you will run into the problem that some teams already have 3 submissions and can now make predictions based on three or even six rounds of feedback... obviously, ensembling the top 2 of the 6 would be the next submission.
To avoid such problems, I think the best way is to proceed to Round 2, but starting with Round 2, send no email feedback of scores until the round ends, and then display everything on the leaderboard, so that you get your score and everyone else's scores at the same time. This would allow a round to be extended without any issue. You can add a Round 4 if needed.

I'd also like to see an extension if possible. Just figuring out how the whole system works has cost quite a bit of time. Because the training data is huge, processing the files and training models take a lot of time too.