The leaderboards are open for submission; see the challenge website for [submission instructions](https://www.synapse.org/#!Synapse:syn5647810/wiki/402359). Please note that teams are limited to 4 submissions, so use them wisely.

We have scheduled a 3rd webinar for Aug 9, 2016, 9:00 AM PDT. Please register at: https://attendee.gotowebinar.com/register/7505131507389097731 At this webinar, we will discuss analysis topics, including the permutation strategy for the Monte Carlo randomization analysis. After registering, you will receive a confirmation email containing information about joining the webinar.

Created by Solveig Sieberts (sieberts)
@thomas.yu thanks!
Dear @maria_s, You should be able to see your results now on the leaderboard. I apologize for the inconvenience. Best, Thomas
Subchallenge 1, up to 0 hrs: syn7180849. We did get an email that the submission went through fine. Thanks!
@maria_s, Could you please let me know which subchallenge you submitted to and the Synapse ID of the file you submitted? Also, did you receive a confirmation email? Thanks, Solly
Dear Solveig, We made a test submission yesterday but don't see results on the leaderboard. Can you please advise? I've looked under the syn ID of the project from which we submitted the results. Many thanks!
Solveig, Thank you very much for your quick reply! I totally missed the mail with the post-webinar information ^^ Your two posts are very helpful :-)
I have posted a transcript of the Q&A here: https://www.synapse.org/#!Synapse:syn5647810/discussion/threadId=688
Morgan- The webinar link was sent out via email. If you do not have your Synapse account linked to an email that you monitor regularly, it may be a good idea to do so, but my apologies for not also posting on the website. It is available here: https://drive.google.com/file/d/0B_mfy0igbErATW5tbTM1VzV2aVk/view?usp=sharing
Dear Solveig, No member of our team was able to attend the 3rd webinar, and I did not find any post-webinar information afterwards. Do you plan to release a recording of the 3rd webinar, as was done for the previous one? That was very convenient (thanks to the team for the work and for the transcript of the Q&A). Best,
Samad- Outcomes are static, not time-dependent, measurements. They measure whether a subject ever gets sick (or the degree to which they get sick) following exposure. Only the gene expression profiling is time-dependent in this predictive modeling exercise.
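For concreteness, here is a minimal pandas sketch of one way to reduce timepoint-level predictions to a single prediction per subject; the column names (SUBJECTID, TIMEHOURS, PREDICTION) and the choice to use the latest allowed timepoint are illustrative assumptions, not the challenge's actual submission schema.

```python
import pandas as pd

# Toy timepoint-level predictions for two subjects (hypothetical schema).
preds = pd.DataFrame({
    "SUBJECTID":  ["A", "A", "A", "B", "B"],
    "TIMEHOURS":  [-20, 0, 10, -20, 0],
    "PREDICTION": [0.41, 0.55, 0.62, 0.18, 0.22],
})

# Because the outcome is static per subject, submit one row per SUBJECTID,
# e.g. the prediction built from the expression profile at the latest
# timepoint allowed for the subchallenge (here, time 0).
submission = (
    preds[preds["TIMEHOURS"] == 0]
    .loc[:, ["SUBJECTID", "PREDICTION"]]
    .reset_index(drop=True)
)
print(submission)
```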
Hi Solly, Why are there only 23 rows in the "Submission sample" files? In the four test sets, there are 166 different sets of gene expression data, coming from single SUBJECTIDs at different timepoints. So, for example, for subject A at times -20, 0, 10, and 20 hours we have different sets of gene expression data, and the outputs, i.e. binary shedding and symptomatic status and the continuous log-symptom score, would be different. How do we reduce multiple predictions for a single SUBJECTID to a single prediction? Is there a specific timepoint we have to predict the output for? Thanks, Samad
Leaderboard submissions are due by August 31st, and permutations are due by September 14th. This information can be found on the [Challenge Timelines](https://www.synapse.org/#!Synapse:syn5647810/wiki/399111) page.
When is the deadline for Community Phase leaderboard predictions?
Mohammad- After internal discussion, we feel that LOOCV is the best approach. However, if you truly can't do LOOCV you may replace it with k-fold CV where k is sufficiently high. You must also document this in your writeup, including a justification as to why LOOCV was not feasible, and documentation of your methodology and actual fold splits (provided as a separate file in your project). Also, could you link me to the wiki where you see that language? It is out of date, and I'd like to correct it. Thanks, Solly
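For illustration only, a minimal scikit-learn sketch of the required LOOCV predictions and the high-k k-fold fallback; the toy data, logistic model, and k=10 are placeholder assumptions, not challenge requirements.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_predict

# Toy stand-in for the training data.
X, y = make_classification(n_samples=60, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000)

# LOOCV: each subject's prediction comes from a model fit on all the others.
loo_pred = cross_val_predict(
    model, X, y, cv=LeaveOneOut(), method="predict_proba"
)[:, 1]

# Fallback when LOOCV is truly infeasible: k-fold CV with a sufficiently
# high k, with the fold splits saved for the write-up.
kfold = KFold(n_splits=10, shuffle=True, random_state=0)
kf_pred = cross_val_predict(model, X, y, cv=kfold, method="predict_proba")[:, 1]
```

Exporting the index pairs from kfold.split(X, y) to a separate file is one way to document the actual fold splits, as described above.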
Hi Solly, this is from the wiki:
```
5. Leave-one-out Cross-Validations in the Training Data
Leave-one-out cross-validations (LOOCVs) must be provided on the training data from your final (phase 4) model. To do so, you must generate a prediction for each subject in the training set based on the remaining subjects.
```
I just want to reiterate that running LOOCV on the whole training set (fitting on n-1 subjects each time) can be computationally challenging if one is using an elaborate algorithm to produce the features: the feature-generation algorithm has to run at least n times, which is time-consuming. So I want to ask whether it is possible to use k-fold cross-validation instead of LOOCV. Thanks, Mohammad
Joshua- That is correct. There are currently no scored submissions. Solly
The leaderboard doesn't seem to show any information. Has anyone actually submitted a model yet? Just curious to know how other teams are doing.
Ka Yee- 1. Yes, your 2 official submissions are counted in your 4 total submissions. In other words, you can have 2 test submissions in addition to your official submissions. You may count any 2 of the 4 as your official submissions as long as one is time 0 and one is time 24, but you must specify in your write-up which of your submissions are the official ones. 2. CVs are required only for your 2 official (time 0 and time 24) submissions. Hope that helps.
I have a couple of questions regarding submissions: 1. Is the final submission counted in the 4 submissions allowed for each team? If yes, is the chronologically last submission considered to be the "final" submission? 2. Is the leave-one-out cross-validation required for each of the 4 submission attempts? Thank you, kayee
Yes, of course. That makes a lot of sense. Thanks!
Chi- I'm happy to clarify. You may keep your code and write-up private from other participants until after analysis of the Test Data, however you must share your code with the Challenge Organizers upon submission of your models. I hope that explains things. Solly
Hi Solly, Could you provide a clarification regarding the submission requirement for reproducible code? Wasn't there a previous mention of code being exempt from submission during the Community Phase, in case future test data became available?

Leaderboards open & Webinar 3 Registration