The results of the independent test predictions have been [posted](https://www.synapse.org/#%21Synapse:syn5647810/wiki/415570). In short, we were unable to find evidence of significant predictive ability in Subchallenge 1; however, Subchallenges 2 and 3 did show significant enrichment for predictive models. Follow the link above to see how you did.

We have scheduled a webinar for March 23rd at 9am Pacific to review the challenge results and discuss next steps for the analyses for the Challenge manuscript. All teams who participated in the leaderboard round and who make their write-ups and code publicly available are eligible for either consortium or named authorship, depending on contribution to the manuscript. Since we have reached the end of the challenge, we ask that you **please make your write-ups and code publicly available at this time**.

Please register for the Respiratory Viral DREAM Challenge Update Webinar on Mar 23, 2017 9:00 AM PDT at: https://attendee.gotowebinar.com/register/5423764797210236163. The webinar will present the results of the Respiratory Viral DREAM Challenge to participants and organize the paper-writing effort. After registering, you will receive a confirmation email containing information about joining the webinar.

Created by Solveig Sieberts (sieberts)
@sjahandideh - The results displayed on the website are for the RSV data set (gene expression array) only, since there was no signal in the Biochronicity (RNA-seq) data for any of the three subchallenges. I will be showing those results at the webinar if you're interested. Solly
Is it possible to calculate the performance of the models separately for the protein and RNA-seq data? There should be a reason for the lack of predictive value in Subchallenge 1. Thanks!
I would like to know whether there is a _specific_ deadline for making the write-up and code public, since the code needs to be cleaned up and the write-up updated and finalized.
Our plan is to try for Science Translational Medicine. Unfortunately, we are unable to release the outcomes data for either of the independent test data sets, by agreement with the data contributors.
In which journal will the challenge results be published? Has that been decided? It would also be nice to have the true labels available so that we can continue updating our models and report the most recent results in our write-ups and possibly the manuscript. Thanks.
Yes, please feel free to update your write-ups and code to reflect your final models. We will not be ranking team performance beyond the rankings currently displayed (within subchallenge and timepoint for each data set).
Are we allowed to update our write-ups and code? Also, will there be any rankings combining the Leaderboard and Final rounds? Thanks.
