Hello Sage Team and all,

We were able to successfully complete 569 of 570 cases! The one incomplete case failed due to a PyTorch memory error: `RuntimeError: CUDA out of memory.`

> May I request the container be tested again to see whether the results are consistent?

My reasoning is that I tested this container on a system with much more restrictive hardware than the amount allotted by Synapse (using toil?) and did not have any problems segmenting the validation dataset. I cannot reproduce the error with the data that I have, which leads me to speculate that this might be the result of over-provisioning the GPU memory.

> Are we disqualified from the competition if we are not able to successfully segment all 570 cases?

Thanks,
-Richard
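For readers hitting the same `RuntimeError: CUDA out of memory` during containerized inference, here is a minimal sketch of a fallback pattern, assuming a standard PyTorch segmentation model; `segment_case`, `model`, and `volume` are hypothetical names and this is not the submitted container's actual code:

```python
import torch

def segment_case(model, volume, device="cuda"):
    """Hypothetical wrapper: run inference on the GPU, fall back to CPU on OOM."""
    model = model.to(device).eval()
    try:
        with torch.no_grad():                       # no autograd buffers during inference
            return model(volume.to(device)).cpu()
    except RuntimeError as err:
        if "out of memory" not in str(err):
            raise                                   # only handle CUDA OOM here
        torch.cuda.empty_cache()                    # release cached allocator blocks
        with torch.no_grad():
            return model.cpu()(volume.cpu())        # slower CPU pass, but the case completes
```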

Created by Richard Barcus (@MrRichard)
Hi @MrRichard,

You would **not** be disqualified from the competition. We would assign a penalty score (Dice = 0, Hausdorff = 374) for the one case that could not be evaluated.
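To make the effect of that policy concrete, here is a small illustrative calculation; every value except the stated penalty (Dice = 0, Hausdorff = 374) is a made-up placeholder, and the organizers' exact aggregation may differ:

```python
# Illustration only: fold the penalty for one failed case into per-team means.
dice_scores = [0.91] * 569 + [0.0]        # 569 evaluated cases plus the penalised one
hausdorff_scores = [4.0] * 569 + [374.0]

mean_dice = sum(dice_scores) / len(dice_scores)
mean_hd = sum(hausdorff_scores) / len(hausdorff_scores)
print(f"mean Dice = {mean_dice:.4f}, mean Hausdorff = {mean_hd:.2f}")
# One penalised case out of 570 lowers mean Dice by ~0.0016 and raises mean Hausdorff by ~0.65.
```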
Hi @MrRichard,

I have re-run the model from submission 9716011 (`docker.synapse.org/syn26023459/riipl_brats@sha256:4e3d9a52188bb1dacede058ccf74fae8d69244c65f3295c399a3476be0bb13dc`) on the failing test case, and unfortunately, the PyTorch memory error still persists. If it helps, we have also ensured that no other processes are running when we start the Docker container:

```bash
$ docker run --rm -d \
    --network none \
    --runtime=nvidia \
    -v $PWD/{hidden}:/input:ro \
    -v $PWD/output:/output:rw \
    docker.synapse.org/syn26023459/riipl_brats@sha256:4e3d9a52188bb1dacede058ccf74fae8d69244c65f3295c399a3476be0bb13dc

$ nvidia-smi
Thu Oct 14 21:31:43 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.57.02    Driver Version: 470.57.02    CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  On   | 00000000:00:1E.0 Off |                    0 |
| N/A   33C    P0    49W / 300W |   1943MiB / 16160MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      6423      C   /usr/bin/python3                 1941MiB |
+-----------------------------------------------------------------------------+
```

@ujjwalbaid can follow up if needed.
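If it helps with debugging on the participant side, one option (offered only as a sketch, not as the submission's actual code) is to log PyTorch's allocator statistics around the forward pass of the failing case, since `nvidia-smi` only shows total per-process usage:

```python
import torch

def log_gpu_memory(tag: str) -> None:
    """Print PyTorch's own view of GPU memory in MiB."""
    allocated = torch.cuda.memory_allocated() / 2**20   # memory currently held by tensors
    reserved = torch.cuda.memory_reserved() / 2**20     # memory held by the caching allocator
    peak = torch.cuda.max_memory_allocated() / 2**20    # peak allocation since last reset
    print(f"[{tag}] allocated={allocated:.0f} MiB, reserved={reserved:.0f} MiB, peak={peak:.0f} MiB")

# Hypothetical usage around the failing case's forward pass:
# torch.cuda.reset_peak_memory_stats()
# log_gpu_memory("before inference")
# prediction = run_segmentation(volume)   # placeholder for whatever the container actually calls
# log_gpu_memory("after inference")
```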

Questions about submissions that are not successful with all 570 cases.