Hello, we are preparing to submit a Docker file for our project. We noticed that the training and validation data has the form:

- /input/BraTS2021_ID/BraTS2021_ID_flair.nii.gz
- /input/BraTS2021_ID/BraTS2021_ID_t1.nii.gz
- /input/BraTS2021_ID/BraTS2021_ID_t1ce.nii.gz
- /input/BraTS2021_ID/BraTS2021_ID_t2.nii.gz

but the manual for Docker submission (https://www.synapse.org/#!Synapse:syn25829067/wiki/611500) shows the test input like this:

- /input/BraTS2021_ID_flair.nii.gz
- /input/BraTS2021_ID_t1.nii.gz
- /input/BraTS2021_ID_t1ce.nii.gz
- /input/BraTS2021_ID_t2.nii.gz

What form should we expect the input folder to have for our Docker submission to run on the test data?

Thanks,
Monibor

Created by Md Monibor Rahman Monibor
@kamleshp , Thank you again for your insights. After some discussion, we have moved toward upgrading the infrastructure hardware to a V100 to better emulate the specs used by the participants -- this should help with loading large models such as yours. Please feel free to resubmit your model for evaluation with the updated GPU. For reference, you will have access to 1 GPU with 16 GiB of vRAM (GPU memory), 8 vCPUs, and 61 GiB of RAM.
@kamleshp , Thank you for providing us with more details regarding the model loading time. We are currently running the workflow as one Docker run per case (rather than a single Docker run for all cases) so that we can time the segmentation of each case individually. Given what you have provided, I will bring this up with the Challenge Organizers for further discussion.
Thanks @vchung. Yes, the container load time is one aspect, but I would like to clarify further that I am pointing to the model load time: the time required to load the model after starting the main program (python run_model.py), which includes, for instance, TensorFlow graph creation, memory allocation, etc. If the model load time is large, the 390 s might not be enough, because most of the time would be spent on graph creation, memory allocation, and so on. Alternatively, when the input format is /input/BraTS2021_ID/BraTS2021_ID_flair.nii.gz, I can load the model once and predict on all cases in a loop, which avoids paying the model load time for each case.
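For illustration, a minimal sketch of that load-once-then-loop approach, assuming the per-case-subfolder layout (`load_model` and `predict` are placeholder callables, not the actual submission code):

```python
import os

# Modalities provided for each BraTS 2021 case.
MODALITIES = ("flair", "t1", "t1ce", "t2")

def case_files(case_dir):
    # Per-case-subfolder layout: /input/<ID>/<ID>_<modality>.nii.gz
    case_id = os.path.basename(case_dir)
    return [os.path.join(case_dir, f"{case_id}_{m}.nii.gz") for m in MODALITIES]

def predict_all(case_dirs, load_model, predict):
    # Load the model once, outside the loop, so graph creation and memory
    # allocation are paid a single time rather than once per case
    # (load_model and predict are hypothetical callables supplied by the caller).
    model = load_model()
    return {os.path.basename(d): predict(model, case_files(d)) for d in case_dirs}
```

With one Docker run per case, by contrast, that `load_model()` cost recurs on every invocation, which is exactly the concern raised above.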
@kamleshp , Execution time is calculated from when the container is started to when it is removed, or when 390 s have transpired (whichever comes first); we do not take into account the time it takes to pull the image. That being said, you are correct that there is a difference between container creation time and container start time. We are updating our workflow now so that the execution time correctly reflects this. Thank you for the insights!
@vchung If you are instantiating Docker for each case, can you please clarify how you will calculate the time spent on prediction? Each Docker run consists of two components:

1. The time spent loading the model, which can be large depending on the model size.
2. The time spent on the actual prediction.

If (1), i.e. the model load time, is large (which is the case for my submission), we may not have enough time to predict all cases. Time (1) will recur for every case, making the total time spent on model loading (number of cases × model load time).
@ShuLab , Yes, your first understanding is correct: your Docker model is expected to predict segmentations for one case ID only. We will handle mounting the input and output folders for you, as well as the step of creating a tarball of your output NIfTI files. If it helps at all, the Docker run commands would look something like this:

```sh
docker run ... -v /path/to/BraTS2021_00001:/input:ro ...
docker run ... -v /path/to/BraTS2021_00013:/input:ro ...
...
```
Hi @vchung , So do you mean that for our final submission, you will mount those four files (/input/BraTS2021_ID_flair.nii.gz, /input/BraTS2021_ID_t1.nii.gz, /input/BraTS2021_ID_t1ce.nii.gz, /input/BraTS2021_ID_t2.nii.gz) directly into the input folder and run the Docker once, then empty the input folder, mount the four files for another ID, and run the Docker again? In other words, you **won't** mount many folders of different IDs, each containing these four files, and run our Docker only once to get all the output. Please help me confirm my understanding of the data format; if I have misunderstood, please correct me. Thank you!
Hello @Monibor , We apologize for the unclear instructions. For your final submission, we are mounting the individual case folders for you, so that the available input files per Docker run are:

- /input/BraTS2021_ID_flair.nii.gz
- /input/BraTS2021_ID_t1.nii.gz
- /input/BraTS2021_ID_t1ce.nii.gz
- /input/BraTS2021_ID_t2.nii.gz

I hope this helps!
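As a minimal sketch of how a container might build these four paths under the flat layout described above (the function name and case ID are illustrative assumptions, not part of the official instructions):

```python
import os

# Expected modalities for one BraTS 2021 case.
MODALITIES = ("flair", "t1", "t1ce", "t2")

def input_paths(case_id, root="/input"):
    # Flat layout: the four files sit directly under /input, with no
    # per-case subfolder (case_id here is an illustrative placeholder).
    return [os.path.join(root, f"{case_id}_{m}.nii.gz") for m in MODALITIES]
```

Since only one case is mounted per run, the container does not need to enumerate case subfolders; it just reads the four files it finds under /input.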

Docker Input data format