Dear organisers, I would like to know which machine will be used for the test phase, i.e. its CPU, GPU, RAM, VRAM, number of cores, and other details. Thank you for your attention, André Ferreira

Created by André Filipe Sousa Ferreira (ShadowTwin41)
Dear @ujjwalbaid The MLCube submission is already done and accepted. Best wishes!
Dear organisers, Sorry to bother you again. Is there any way of knowing if the MLCube is correctly submitted? Thank you very much!
Dear @ujjwalbaid , I have just submitted the docker image and the MLCube. The local testing worked fine. Thanks!
Hi @ShadowTwin41, The testing cohort will contain ~600 subjects. We will reach out to you once your MLCube submission is done.
Dear organizers, As I mentioned before, I'm afraid that 100 GB of disk space may not be enough for my solution, depending on the number of testing cases. Could you tell us how many cases are in the testing set? Or do I have to reduce the amount of data created? That might introduce complications and would reduce inference efficiency. Thank you very much!
Hi @lulululu, Currently, we are not accounting for any time limits.
@lulululu , Presently, a time limit has not been configured into the workflow, but @ujjwalbaid can confirm.
Dear @vchung the model we plan to submit requires about 12 minutes per case on a NVIDIA A100-SXM4-40GB - you mentioned above that you are not aware of any timeouts but we still would like to ask if this is okay?
Thank you for providing your benchmark, @ShadowTwin41 ! I will forward this along to the organizers and they may discuss further if some adjustments to the compute resources are needed. For now, please plan for the specs mentioned above.
@vchung It took around 2 hours and 30 minutes to run the 219 cases, using 33 GB of disk space. I don't know how many cases are in the testing set, but if it is more than triple 219, please let me know, or if you decide to make more disk space available. Thank you for your time!
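For context, the benchmark above can be extrapolated to the ~600-subject cohort mentioned earlier in the thread. This is only a rough linear scaling sketch using the numbers from this post; the actual test-set size and per-case cost may differ:

```python
# Rough linear scaling from the 219-case validation run
# (2 h 30 min total, 33 GB of disk) to a ~600-case test set.
validation_cases = 219
validation_seconds = 2 * 3600 + 30 * 60   # 2 h 30 min
validation_disk_gb = 33.0
test_cases = 600  # approximate cohort size stated by the organisers

seconds_per_case = validation_seconds / validation_cases
est_runtime_h = seconds_per_case * test_cases / 3600
est_disk_gb = validation_disk_gb / validation_cases * test_cases

print(f"~{seconds_per_case:.0f} s/case, ~{est_runtime_h:.1f} h "
      f"and ~{est_disk_gb:.0f} GB for {test_cases} cases")
```

Under these assumptions the run would need roughly 7 hours and ~90 GB, which stays just under the 100 GB starting allocation quoted below in the thread.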
@vchung I'm not sure how long it takes per case. I will build the complete pipeline and monitor the time for the whole validation set, including all the steps. The first run of the Docker container will take longer because everything has to be initialised, but afterwards the time per case should be consistent. I will write here ASAP. Thanks again!
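Collecting per-case timings during such a pipeline run can be done with a small wrapper like the one below. This is a sketch: `run_inference` and the case list are placeholders for the actual pipeline, not part of any submission code in this thread:

```python
import time
from statistics import mean, median

def run_inference(case_id):
    # Placeholder for the actual per-case inference call.
    time.sleep(0.01)

cases = [f"case_{i:03d}" for i in range(5)]  # placeholder case IDs

timings = {}
for case_id in cases:
    start = time.perf_counter()
    run_inference(case_id)
    timings[case_id] = time.perf_counter() - start

print(f"mean {mean(timings.values()):.3f} s, "
      f"median {median(timings.values()):.3f} s per case")
```

Reporting the median as well as the mean helps separate the slower first case (container warm-up, model loading) from the steady-state per-case time.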
@ShadowTwin41, I am not aware of any timeouts at this time, but @ujjwalbaid can confirm! So we can have a benchmark, how long would you say your algorithm takes per case?
@vchung Thank you for your answer! The solution I'm developing now would need around 250 GB of disk space (250 GB are needed for the training set; the total depends on how many test cases there are). I would also like to know if there is a timeout for running the inferences, e.g. 12 hours or something like that. Thanks!
@ShadowTwin41 , Yes, that is 52 GiB memory. For disk space, we are starting at 100 GB (give or take system resources as well). The organizers will discuss further if more space is needed. Hope this helps!
@vchung When you say 52 GiB memory, do you mean RAM? What about disk space? And is there a timeout for running? Thanks!
@ShadowTwin41 , Sorry for the delayed response! Your MLCube submission will be run in a cloud compute environment with the following specs:
* 1 GPU (NVIDIA V100)
* 8 vCPU
* 52 GiB memory
* 16 GiB GPU memory

EDIT: typo fix

Test phase machine to run MLCube