Dear Organizing Team, regarding the four sub-competitions, could you kindly specify which four categories they entail? Is my understanding correct that we are evaluated on our ability to perform either part-level or type-level instrument segmentation, with or without knowing which domain the images come from during testing? Additionally, I would like to know which evaluation metrics will be used to assess the final results and how they will be applied. I am also confused about the attributes.csv file in the dataset and unsure about its purpose. Could you please provide some insight into the significance of this file and how it relates to the overall competition task? Thank you very much for your attention.

Created by yeep
Thanks for the question!

Regarding attributes.csv: we just added a description to the data page. The file contains some information on image quality. Feel free to use it during training if you think it helps you in any way!

You can find the information on what to submit for each task here: https://www.synapse.org/#!Synapse:syn47193563/wiki/621970

As for metrics: for parts segmentation we will use the Dice/F1 score and the normalized Hausdorff distance. For each metric, we will rank the participants, and each team's average rank will be their final rank. For instrument type, we will compute the mAP between the detected and the actual types.

I hope this helps!
Best, Sebastian
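For anyone wondering how these two pieces fit together, below is a minimal sketch of a per-image Dice score and the average-rank aggregation described above. This is not the official evaluation code; the function names, the handling of empty masks, and the example scores are assumptions for illustration only.

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice/F1 overlap between two binary masks (assumption: both masks
    empty counts as a perfect score of 1.0)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 1.0 if denom == 0 else 2.0 * intersection / denom

def average_rank(per_metric_scores: dict, higher_is_better: dict) -> dict:
    """Rank teams within each metric, then average each team's ranks.

    per_metric_scores: {metric_name: {team_name: score}}
    higher_is_better:  {metric_name: bool}
    """
    ranks: dict = {}
    for metric, scores in per_metric_scores.items():
        # Sort teams from best to worst for this metric.
        ordered = sorted(scores, key=scores.get,
                         reverse=higher_is_better[metric])
        for rank, team in enumerate(ordered, start=1):
            ranks.setdefault(team, []).append(rank)
    # A team's final standing is its mean rank across metrics.
    return {team: float(np.mean(r)) for team, r in ranks.items()}

# Hypothetical example: Dice is higher-is-better, normalized Hausdorff
# distance ("nhd" here) is lower-is-better.
scores = {"dice": {"teamA": 0.91, "teamB": 0.88},
          "nhd":  {"teamA": 0.10, "teamB": 0.07}}
print(average_rank(scores, {"dice": True, "nhd": False}))
# -> {'teamA': 1.5, 'teamB': 1.5}
```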
