Dear organizers,

Thank you for organizing this challenge. I have some questions that need clarification.

1. Is there any public leaderboard during the validation phase? The ultimate goal of this challenge is to identify the state-of-the-art segmentation method for glomeruli identification, and a leaderboard could help us see and push our preliminary limits.
2. Can we use the validation data to train the models for the final submission?
3. Could you share the details of the docker evaluation platform? For example, CPU/RAM/GPU specs?
4. From [this discussion](https://www.synapse.org/Synapse:syn54077668/discussion/threadId=11115), we understand that we're permitted to submit two trained models. Does this mean a maximum of two trained models for each task?
5. The segmentation mask for Task 2 should be at 20X magnification. We understand that the test images are distributed in the same way as the validation set, with the same disease model and data organization. Given that, for example, 12-174_56Nx is at 80x digital magnification, does this mean we need to downsample so that the predicted mask is at 20x digital magnification?

**Additional comment on the permitted two-trained-model submission.** I propose limiting the total model size (e.g., total parameters of all models < 250M) instead of the number of permitted models (see question 4). For example, one could create a `torch.nn.Module` that combines multiple models; when saved in PyTorch, it would count as one model. Will this be allowed?
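For concreteness, here is a minimal sketch of the kind of wrapper we mean. The `TinyUNet` placeholder and the simple logit averaging are purely illustrative assumptions; real sub-models (e.g., full U-Nets) would be plugged in instead.

```python
import torch
import torch.nn as nn


class TinyUNet(nn.Module):
    """Placeholder stand-in for a real segmentation network (illustrative only)."""

    def __init__(self, in_channels: int = 3, out_channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, out_channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class EnsembleSegmenter(nn.Module):
    """Wraps several sub-models in one nn.Module and averages their logits."""

    def __init__(self, models):
        super().__init__()
        self.models = nn.ModuleList(models)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = torch.stack([m(x) for m in self.models], dim=0)
        return logits.mean(dim=0)


# Saving the wrapper produces a single checkpoint file, i.e. "one model" on disk.
ensemble = EnsembleSegmenter([TinyUNet(), TinyUNet()])
torch.save(ensemble.state_dict(), "ensemble.pth")
```

Since saving this wrapper yields a single checkpoint, we would like to know whether it counts as one model or as several.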

Created by Capybara capybara
Thanks a lot for clarifying!
Yes, you can. If you eventually rank among the top performers of the challenge, you will be asked to share the code and model. Thanks
@huoy1 Thanks so much for your answer! Still, there is one question that has not been confirmed. This is important to us, so please help confirm it when you have time.

> So we're allowed to create a `torch.nn.Module` which combines multiple models (e.g., multiple U-Net models)?

Sorry for asking again, but can you confirm this?
You can test your model using the validation docker that we released. For the testing phase, it will be old school: you will send us the docker, and we will run it on the withheld testing data.
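For reference, a generic sketch of what such a submission image might look like is below. The base image, file layout, and `inference.py` entrypoint with `/input` and `/output` mounts are hypothetical assumptions; the actual interface should mirror the released validation docker.

```dockerfile
# Hypothetical submission image; in practice, mirror the released validation docker.
FROM pytorch/pytorch:2.1.0-cuda12.1-cudnn8-runtime

WORKDIR /workspace

# Install dependencies and copy model weights plus inference code (paths are illustrative).
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY model_weights/ ./model_weights/
COPY inference.py .

# The organizers run this container on the withheld test data;
# the /input and /output mount points are assumptions, not the official spec.
ENTRYPOINT ["python", "inference.py", "--input", "/input", "--output", "/output"]
```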
@huoy1

> We will release the testing data after closing this challenge, so people can continue working on this problem.

Thanks! Understood that we can't do anything during the testing phase. According to the timeline, the model submission deadline is **Aug. 1st**; when should we expect the submission server to open?

> So we're allowed to create a `torch.nn.Module` which combines multiple models (e.g., multiple U-Net models)?

Sorry for asking again, but can you confirm this, please?
We will release the testing data after closing this challenge, so people can continue working on this problem.
@huoy1 Thanks a lot for clarifying!

> We will not have a leaderboard during the validation phase, but we will have one in the testing phase.

The testing phase runs from **Aug. 1st - Sept. 15**, and we cannot submit models **after Aug. 1st**, am I correct? If so, I think there is a risk that the top-performing model among all submissions might not be the best possible model for this challenge, since we would not be able to push the performance further. Please correct me if I'm wrong.

> For the additional comment, we don't limit the model size for now.

So we're allowed to create a `torch.nn.Module` which combines multiple models (e.g., multiple U-Net models)? Sorry for asking, but can you confirm this?
Thanks for the questions.

"Is there any public leaderboard during the validation phase? The ultimate goal of this challenge is to identify the state-of-the-art segmentation method for glomeruli identification, and a leaderboard could help us see and push our preliminary limits."

We will not have a leaderboard during the validation phase, but we will have one in the testing phase.

"Can we use the validation data to train the models for the final submission?"

Yes, you can use the validation data however you want.

"Could you share the details of the docker evaluation platform? For example, CPU/RAM/GPU specs?"

The workstation we will use for evaluation has 48 GB of RAM and a 48 GB GPU card (A6000).

"From this discussion, we understand that we're permitted to submit two trained models. Does this mean a maximum of two trained models for each task?"

Yes, you can submit two models for each task.

"The segmentation mask for Task 2 should be at 20X magnification. We understand that the test images are distributed in the same way as the validation set, with the same disease model and data organization. Given that, for example, 12-174_56Nx is at 80x digital magnification, does this mean we need to downsample so that the predicted mask is at 20x digital magnification?"

When you generate the final binary mask for evaluation, it should be at 20x.

For the additional comment, we don't limit the model size for now.

Thanks
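To make that last requirement concrete, below is a minimal sketch of downsampling a mask predicted at 80x to 20x (a 4x reduction per dimension). The file names and the use of OpenCV nearest-neighbor resizing are assumptions for illustration, not part of the official evaluation code.

```python
import cv2
import numpy as np

# Hypothetical file name: a binary mask predicted at 80x digital magnification.
mask_80x = cv2.imread("pred_mask_80x.png", cv2.IMREAD_GRAYSCALE)

# 80x -> 20x is a 4x reduction in each dimension. Nearest-neighbor
# interpolation keeps the mask strictly binary (no interpolated gray values).
scale = 20 / 80
mask_20x = cv2.resize(
    mask_80x,
    None,
    fx=scale,
    fy=scale,
    interpolation=cv2.INTER_NEAREST,
)

# Ensure a clean {0, 255} binary mask before saving for evaluation.
mask_20x = (mask_20x > 0).astype(np.uint8) * 255
cv2.imwrite("pred_mask_20x.png", mask_20x)
```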
