Now that training is enabled, we can experiment with different training schemes. The current limit of 3 submissions per week might not be enough for some teams. Another topic: as a team grows, more ideas can be tested, and I'm not sure whether it would be fair to grant extra submissions in that case.

Created by Henry Niu (niudd)
Several factors determine the number of submissions allowed in a challenge, including the risk of over-fitting and the compute load that we can afford. Challenge organizers are cautious when estimating the impact of the first factor, because the considerable effort invested in organizing a scientific challenge is at stake. One solution I'm looking at for this challenge is implementing the BayesBootLadderBoot algorithm described in this [manuscript](https://arxiv.org/pdf/1607.00091.pdf). I'll resume this work when I'm back from vacation next week.

> perhaps as many as the previous limit of one per day.

Thanks for sharing this value.
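For readers unfamiliar with why such an algorithm helps, here is a minimal sketch of a Ladder-style leaderboard mechanism. This is my own illustration, not the challenge's actual implementation: the class name, the default step size, and the rounding scheme are all assumptions. The core idea is that the leaderboard only reports a new score when a submission improves on the best score by more than the statistical noise of the holdout set, which limits how much teams can over-fit to the leaderboard through repeated submissions.

```python
import math

class LadderLeaderboard:
    """Sketch of a Ladder-style leaderboard (hypothetical naming).

    A new score is reported only when it beats the current best by at
    least one noise-sized step; otherwise the previous best is echoed
    back, so tiny, possibly spurious improvements leak no information.
    """

    def __init__(self, n_holdout, step=None):
        # Default step ~ 1/sqrt(n): the natural noise scale of an
        # average of n bounded per-example losses (an assumption here,
        # not a value taken from the challenge infrastructure).
        self.step = step if step is not None else 1.0 / math.sqrt(n_holdout)
        self.best = float("inf")  # lower loss is better

    def submit(self, holdout_loss):
        # Update only on an improvement larger than one step, and
        # round the reported score to the step grid.
        if holdout_loss < self.best - self.step:
            self.best = self.step * round(holdout_loss / self.step)
        return self.best
```

For example, with a 10,000-example holdout (step 0.01), a submission that lowers the loss from 0.30 to 0.295 would still be reported as 0.30, while a drop to 0.25 would be reported as a genuine improvement.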
A unique and remarkable feature of the DREAM challenges, compared with more typical machine-learning contests (e.g. Kaggle or CrowdAnalytix), is that because the training_log.txt files are generated from a verification run on synthetic data rather than on real data, the modeler receives no feedback whatsoever about the optimal (e.g. cross-validation) training parameters selected by the model in any given run. My feeling is that this lack of feedback makes training more difficult; whether it also makes over-training less likely is open to question. I would therefore also favor an increase in the number of allowed submissions per week, perhaps as many as the previous limit of one per day.

Can we increase the submission limit?