Hello everyone, can I confirm my understanding of the guidelines?

1) By Oct 9, we have to submit our model, developed on the synthetic dataset, as a Docker image? This is to help us understand the workflow and become familiar with Docker file submission?
2) After Oct 9, our model will be trained and validated on UW data from 2008-2018? We will not be involved during this phase other than fine-tuning our model based on the published ranking and resubmitting the Docker file?
3) We keep iterating until Jan 9th, 2020. Am I right?
4) How long might it take to learn the model's performance on the UW data after a submission? For example, if I iterate on the model and submit it as a Docker file again, will I get the new ranking in 3-4 hours?

Thanks for your time.

Created by ssmk
Hi @SSMK,

> 1. By Oct 9, we have to submit our model developed on the synthetic dataset as a Docker image? This is to help us understand the workflow and become familiar with Docker file submission?

We encourage submissions during this open phase so that you can learn in advance how to build and submit models to the challenge. This also helps us stress-test the infrastructure before the start of the challenge. For testing purposes, you can submit your model. The training and inference program included in your submission will be run on a subset of the synthetic data (synpuf). Because these data are open, you will receive the log file and the prediction file generated. As a reminder, the synpuf data must not be used to train a meaningful model; they are only used for validating the format of your submission.

> 2. After Oct 9, our model will be trained and validated on UW data from 2008-2018? We will not be involved during this phase other than fine-tuning our model based on the published ranking and resubmitting the Docker file?

First, your model will still be run on a server that includes the synpuf data, where we will validate it. Once your model passes this validation, it will automatically start running on the UW data. Because the UW data are private, nothing generated by your training and inference program will be returned to you. The only exception is the performance of your model (AUC), which will be returned to you and published in a leaderboard.

> 3. We keep iterating until Jan 9th, 2020. Am I right?

After receiving the score of your first submission, you may modify your algorithm and resubmit it. After processing your second submission, you will receive a second score.

> 4. How long might it take to learn the model's performance on the UW data after a submission? For example, if I iterate on the model and submit it as a Docker file again, will I get the new ranking in 3-4 hours?
It is difficult to answer this question, in part because we don't know what your program does or how fast it runs. The only thing I can say is that the baseline model we provide takes about 1 hour to run on the UW data.
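For illustration, here is a minimal sketch of the kind of prediction-file writer an inference program in a submission might contain. The file name, column names, and scoring interface are assumptions for the example, not the challenge's actual submission specification (check the challenge's guidelines page for the required format):

```python
import csv

def write_predictions(person_ids, scores, out_path):
    """Write a two-column prediction file: one risk score per person.

    Hypothetical format: a CSV with a 'person_id' and a 'score' column.
    The real challenge format may differ.
    """
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["person_id", "score"])
        for pid, score in zip(person_ids, scores):
            writer.writerow([pid, score])

if __name__ == "__main__":
    # Hypothetical output path; the challenge's actual mount points may differ.
    write_predictions([1, 2, 3], [0.1, 0.9, 0.5], "predictions.csv")
```

A program like this would be packaged into the Docker image alongside the training code, so the organizers can run it unattended on both the synpuf and UW data.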
