Dear Organizers, Thanks again for putting together this great competition, and congratulations to the top winners! May I ask a question about the secondary metric in SC2? I wonder whether assessing the statistical significance of the AUC would be technically possible. Perhaps bootstrap sampling could be used for this purpose? My rationale is that since all the top-performing teams are so close on the primary metric (i.e., no statistical difference), it would be more informative to apply the same statistical evaluation to the secondary metric. Of course this is just a suggestion, and thanks for the consideration:) Best, Jing

Created by Jing Tang jtjing
Hi all, WRT the two questions: 1) **Should we be assessing the significance of the secondary metric in SC2?** * All teams did a great job, and the performances of the top four teams' models in SC2 were very close. In this case, we followed the DREAM challenge convention to fall back to the second metric, without statistical significance, to formally break the tie. * As a side note, we try to choose metrics which are somewhat uncorrelated, so that the tiebreaking metric is "definitive". 2) **Will you be releasing the validation data?** * That data has not yet been released to the public by the BeatAML group. I'll see if there are plans to do so. Best, Jacob and the CTD^2^ BeatAML DREAM Challenge Admins
I think that is a good point: if the bootstrapped concordance index is evaluated as the primary outcome, there should be some consistency in how the secondary evaluation metrics are assessed as well. Comparing ROC curves is not a new task, and R packages like "pROC" are designed to conduct a statistical test between different ROC curves.
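To illustrate the kind of test being suggested, here is a minimal sketch of a paired bootstrap comparison of two models' AUCs, written in Python rather than R (pROC's `roc.test` would be the ready-made option in R). The data, resample count, and the centering-based p-value construction are all illustrative assumptions, not the challenge's actual evaluation procedure.

```python
import random

def auc(labels, scores):
    # Rank-based AUC (Mann-Whitney): the probability that a randomly
    # chosen positive is scored above a randomly chosen negative
    # (ties count as half a win).
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_diff(labels, scores_a, scores_b, n_boot=2000, seed=0):
    """Paired bootstrap test of AUC_a == AUC_b on the same samples.

    Resamples the cases with replacement, recomputes the AUC difference
    on each resample, and derives a two-sided p-value by centering the
    bootstrap distribution of differences at zero (the null).
    """
    rng = random.Random(seed)
    n = len(labels)
    observed = auc(labels, scores_a) - auc(labels, scores_b)
    diffs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        lab = [labels[i] for i in idx]
        if len(set(lab)) < 2:   # AUC is undefined without both classes
            continue
        sa = [scores_a[i] for i in idx]
        sb = [scores_b[i] for i in idx]
        diffs.append(auc(lab, sa) - auc(lab, sb))
    centered = [d - observed for d in diffs]
    p_value = sum(abs(c) >= abs(observed) for c in centered) / len(centered)
    return observed, p_value
```

Because the two score vectors are resampled with the same indices, the test accounts for the correlation between models evaluated on the same validation set, which is the situation in SC2.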
Dear Organizers, Thanks a lot for the great competition. I was wondering whether the validation data will also be released, so that we can locally test several of the intermediate models we built on the way to our final submission (I am sure other teams would also be curious). This would help us internally evaluate the performance of the various models we built and gain insight into why certain models didn't perform well. Since there is currently no way to validate these intermediate models, releasing the validation data would be really helpful. With Regards, Raghvendra
