Dear Participants, We note that some of you are experiencing docker submission failures on some of the testing cohort cases. We would like to inform the teams whose submissions fail on **87** cases that these **87** specific cases are not accounted for in the ranking of your team. We explicitly included these 87 cases in the testing cohort in order to perform additional performance evaluation for the meta-analysis manuscript, and they are not considered in the performance evaluation of your method during this testing/ranking phase. Having said that, if your method fails on a different number of cases, you need to coordinate with us for any further action. Regards, The BraTS organizing committee

Created by Ujjwal Baid (@ujjwalbaid)
Hi @jmarndi, We will provide all the details on this at the conclusion of the challenge at #RSNA2021. We plan to release the evaluation scripts, along with toy data, at the conclusion of the challenge. Thank you for your active participation and patience.
Hello @ujjwalbaid and BraTS organizing committee, 87 (15%) cases couldn't be scored for the docker submission we (team @cubrats) made, and we are glad to hear that those cases have been excluded from the final ranking. Having said that, we need as much information as possible to avoid this in the future. Could you please provide more details that can help us debug this issue at our end:

* How are the submissions being scored? Could you provide sample code, or even better, a script containing the evaluation/scoring logic?
* Can you also provide sample data for at least one patient on which we can replicate the issue? If you can provide one sample from the actual test cases for which our docker submission couldn't be scored, that would be greatly appreciated. If that's not possible, a fabricated data sample would do as well, as long as the issue is reproducible with it.
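While the official evaluation script has not been released at this point in the thread, a minimal sketch of Dice-based scoring over the standard BraTS tumour sub-regions may help teams sanity-check their own outputs locally. This is an assumption-laden illustration, not the organizers' code; the region definitions (labels 1, 2, 4 grouped into WT/TC/ET) follow the published BraTS labelling convention, and the masks are flattened to plain Python lists for simplicity.

```python
# Hypothetical sketch of Dice scoring per BraTS tumour sub-region.
# NOT the official evaluation script -- for local sanity checks only.

def dice(pred, truth, labels):
    """Dice coefficient over voxels whose label is in `labels`.

    `pred` and `truth` are flat sequences of integer labels
    (0 = background, 1 = necrotic core, 2 = edema, 4 = enhancing tumour).
    """
    p = [v in labels for v in pred]
    t = [v in labels for v in truth]
    inter = sum(a and b for a, b in zip(p, t))
    denom = sum(p) + sum(t)
    # Convention: if the region is absent in both masks, score 1.0.
    return 1.0 if denom == 0 else 2.0 * inter / denom

# Sub-regions built from the raw labels (BraTS convention):
REGIONS = {
    "WT": {1, 2, 4},  # whole tumour
    "TC": {1, 4},     # tumour core
    "ET": {4},        # enhancing tumour
}

def score_case(pred, truth):
    """Return a Dice score per sub-region for one case."""
    return {name: dice(pred, truth, labels)
            for name, labels in REGIONS.items()}
```

For example, a prediction that misses one necrotic-core voxel still scores well on WT but is penalized more heavily on TC, which is why per-region scores are reported separately.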

Docker submissions failing on some of the testing cohort cases