Dear organizers, we have read the evaluation rules and found that there is no way to validate on the test set in advance, which makes it hard to tell whether our training is heading in the right direction. Would it be possible to release the test samples ahead of time without the labels, so that we could submit our segmentation results directly and receive a score? Also, when will the final ranking be announced? Thank you.

Created by Zhenliang Ni (ZhenliangNi)
We have uploaded the Python evaluation script [here](https://github.com/schnobi1990/robust_mis_2019/tree/master/evaluation).
Dear ZhenliangNi, to avoid tuning and overfitting of methods on the test data, we decided not to publish any test data. Instead, participants submit Docker containers with their methods, which ensures a fair competition. The final ranking will be announced at the MICCAI workshop. Kind regards, the ROBUST-MIS organizers
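For a local sanity check before submitting a container, a minimal sketch of a per-frame Dice score on binary masks is shown below. This is only an illustration (assuming NumPy arrays as masks and a hypothetical `dice_score` helper), not the challenge's official metric implementation; refer to the linked evaluation repository for the metrics actually used in the ranking.

```python
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks.

    Illustrative only; the official ROBUST-MIS metrics are computed
    by the evaluation code in the linked repository.
    """
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum() + eps)

# Toy example with two 4x4 masks that partially overlap.
pred = np.zeros((4, 4), dtype=np.uint8)
ref = np.zeros((4, 4), dtype=np.uint8)
pred[1:3, 1:3] = 1
ref[1:4, 1:3] = 1
print(f"Dice: {dice_score(pred, ref):.3f}")  # 0.800
```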
