The final scoring queues are now open for submissions. Please see the [wiki](https://www.synapse.org/#!Synapse:syn5647810/wiki/412148) for instructions. Please be careful to submit to the correct queues, so that your entries can be scored.
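For teams scripting their submissions, the Synapse Python client can push a prediction file to an evaluation queue. The sketch below is purely illustrative: the queue ID and file ID are placeholders (the real ones are on the wiki linked above), and it assumes `synapseclient` is installed and your credentials are already configured.

```python
import synapseclient

syn = synapseclient.Synapse()
syn.login()  # assumes cached credentials or a configured ~/.synapseConfig

# Placeholder IDs: look up the real final-scoring queue ID and the Synapse ID
# of your own prediction file on the challenge wiki.
EVAL_QUEUE_ID = "1234567"          # hypothetical evaluation queue ID
entity = syn.get("syn00000000")    # hypothetical ID of your prediction file

# Only the last submission per team per queue is scored, so resubmitting
# simply supersedes your earlier entry.
submission = syn.submit(evaluation=EVAL_QUEUE_ID, entity=entity,
                        name="Final model, T<=24")
```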
Created by Solveig Sieberts (sieberts)

@kevin.bleakley - We will be scoring one model per team per round. Thus, we will score one model for T <= 0 and one model for T <= 24. While it is not compulsory to submit a model in any round, we would find it useful and encourage you to do so.

Just to be clear: does scoring one model per subchallenge mean that, for Subchallenge 2 for example, we are allowed to submit only for T <= 24 hours if we think that model will perform better than our T = 0 model? That is, it is not compulsory to provide a first submission before the 31st of January?

Maybe it would be useful to score 2 or 3 submissions per subchallenge, instead of only the last one, for this small-sample-sized problem.

I understand that concern; it is a continual issue in DREAM challenges. Unfortunately, in the post-hoc analyses we pay a power penalty for each additional submission scored, so there is a trade-off between being inclusive enough to be sure we capture the "best" model and being so inclusive that we cannot overcome the multiple testing burden. Let me consider this, and poll the organizers and other participants on the subject.

The difficulty here is that for the RSV set we have only around 20 samples in our training set, and there is no Leaderboard data for RSV. Furthermore, Biochronicity is an RNA-seq experiment, and we don't know how models built on the existing samples will perform on that set. Since we are blinded, we cannot reliably compare one model against another on these specific experiments; it becomes a bit of a game of chance. Zafer-
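To make the power penalty concrete: the thread doesn't say which post-hoc test the organizers use, but under a standard Bonferroni correction the overall significance level alpha is split across the m submissions scored per team, and power falls accordingly. Below is a minimal sketch assuming a one-sided z-test, a hypothetical standardized effect size of 0.6, and the roughly 20 RSV training samples mentioned above; none of these numbers come from the challenge itself.

```python
# Illustrative sketch only: the actual post-hoc analysis is not specified in
# this thread, so a one-sided z-test with Bonferroni correction stands in.
from scipy.stats import norm

def power(n: int, delta: float, alpha: float) -> float:
    """Power of a one-sided z-test: P(reject H0 | true effect size delta)."""
    z_crit = norm.ppf(1 - alpha)          # critical value at level alpha
    return 1 - norm.cdf(z_crit - delta * n ** 0.5)

n = 20        # roughly the RSV training-set size mentioned above
delta = 0.6   # hypothetical moderate standardized effect size
alpha = 0.05  # overall significance level

# Bonferroni: scoring m submissions per team splits alpha m ways, so power
# drops (from ~0.85 at m=1 toward ~0.64 at m=5 with these assumptions).
for m in (1, 2, 3, 5):
    print(f"m={m} scored submissions: power={power(n, delta, alpha / m):.2f}")
```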
We will only be scoring **one** model per team per subchallenge. Because participants are blinded to the results, we will accept an unlimited number of submissions, but only the last one will be scored. Thus, if you make a mistake or decide to update your model, you may do so as many times as you wish. However, you need to make sure that your preferred model is the last one submitted. I hope that clarifies the situation.
Solly

On that wiki page it says we can make as many submissions as we want. I am guessing that those multiple submissions (except for the last one) will not be scored, as in the Leaderboard experiment, right? Practically, I don't see the benefit of making multiple submissions. Wouldn't it be better if we had three submissions per round, as in the Leaderboard phase, so that we could compare different models?