Hi @vchung @trberg , I have been struggling with invalid submissions caused by the time limit being exceeded these days. I have tried many times with more lightweight models and simpler inference strategies, but none of these worked. My latest submission needs only 90 seconds for inference on one image on my local server. Although my GPU (2080 Ti) is faster than a K80, I really doubt that the speed gap between these two kinds of GPUs can be that large. Is there any possible solution to this? Thank you.
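For reference, a minimal way to time a single-case inference locally (a sketch assuming a PyTorch model; `model` and `image` are hypothetical placeholders, and the CUDA synchronizations keep the timer honest):

```python
import time
import torch

# `model` and `image` are hypothetical placeholders for the trained
# network and a preprocessed input volume already on the GPU.
torch.cuda.synchronize()            # flush queued kernels before timing
start = time.perf_counter()
with torch.no_grad():
    prediction = model(image)
torch.cuda.synchronize()            # wait until the forward pass finishes
print(f"single-case inference: {time.perf_counter() - start:.1f} s")
```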

Created by Haozhe Jia (NPU_PITT)
@vchung Thank you so much.
@NPU_PITT , Submission 9715935 has been invalidated. You may submit one more time before the deadline on Sept 23rd!
@vchung Could you please also cancel my previous submission (9715935)? Thanks.
@YLLAB , Apologies, I will mark them as invalid instead; you should be able to submit now.
@vchung After I tried, I still hit the submission limit. Can you help me see what the problem is? Thanks, YLLAB
@YLLAB , I have closed submissions 9715776 and 9715894. You can now submit up to 2 times again until Sept 23rd!
@vchung Can you cancel the two Docker submissions I made (9715776, 9715894)? Due to a hardware upgrade, I want to change my model. Thanks. YLLAB
@vchung, That is great. Thank you and all the BraTS staff so much for your amazing work. Cheers.
@NPU_PITT , We apologize for the trouble and inconvenience caused. After some discussion, we have decided to update the infrastructure hardware to a V100 to better match the specs used by participants. Please feel free to resubmit this model for evaluation with the updated GPU. For reference, you will have access to 1 GPU with 16 GiB of vRAM (GPU memory), 8 vCPUs, and 61 GiB of RAM.
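A quick way to confirm those specs from inside a submitted container (a sketch assuming PyTorch is available in the image; note that `os.cpu_count()` may report the host's CPUs rather than the container's allocation):

```python
import os
import torch

# Print what the container actually sees; useful for verifying the
# advertised V100 / 16 GiB vRAM / 8 vCPU environment.
props = torch.cuda.get_device_properties(0)
print(f"GPU: {props.name}, vRAM: {props.total_memory / 2**30:.1f} GiB")
print(f"visible CPUs: {os.cpu_count()}")
```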
Hi @vchung, I have to say that I really didn't expect the inference speed of the K80 GPU to be that slow. Well, I will try to further compress my model and simplify the inference. Thanks anyway.
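One low-effort way to speed up inference without retraining is mixed-precision evaluation; a minimal sketch, assuming a PyTorch model (`model` and `image` are hypothetical placeholders):

```python
import torch

# Hypothetical placeholders: `model` is the trained network,
# `image` a preprocessed input volume.
model = model.eval().cuda()
with torch.no_grad(), torch.cuda.amp.autocast():  # runs FP16 where safe
    prediction = model(image.cuda())
```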
@NPU_PITT , Thank you for your patience. We locally ran your model of submission 9715783 on a machine with the exact specs as the submission system and received the following stats:

```sh
$ docker container inspect npu_pitt
[
    {
        "Id": "c762b40d91711e3675adb80239f327b431f7a8e94691a150a18c841fe88f669b",
        "Created": "2021-09-16T06:40:02.809351471Z",
        "Path": "python3",
        "Args": [
            "-u",
            "run.py"
        ],
        "State": {
            "Status": "exited",
            "Running": false,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 0,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2021-09-16T06:40:30.902513036Z",
            "FinishedAt": "2021-09-16T06:50:13.793114617Z"
        },
        ...
    }
]
```

Using the `StartedAt` and `FinishedAt` values, we calculated the model's execution time on one case at ~583 seconds:

```python
>>> from dateutil import parser
>>> start = parser.isoparse("2021-09-16T06:40:30.902513036Z").timestamp()
>>> end = parser.isoparse("2021-09-16T06:50:13.793114617Z").timestamp()
>>> end - start
582.8906009197235
```

The end of your container STDOUT showed a similar execution time:

```sh
...
finished: 00001
processing time: 0:09:38.558497
```

While running your container, we also kept watch of the GPU device to ensure that it was being utilized:

```sh
$ nvidia-smi -q -g 0 -d UTILIZATION -l

==============NVSMI LOG==============

Timestamp                                 : Thu Sep 16 06:44:21 2021
Driver Version                            : 470.57.02
CUDA Version                              : 11.4

Attached GPUs                             : 1
GPU 00000000:00:1E.0
    Utilization
        Gpu                               : 99 %
        Memory                            : 34 %
        Encoder                           : 0 %
        Decoder                           : 0 %
    GPU Utilization Samples
        Duration                          : 16.58 sec
        Number of Samples                 : 99
        Max                               : 99 %
        Min                               : 98 %
        Avg                               : 98 %
    Memory Utilization Samples
        Duration                          : 16.58 sec
        Number of Samples                 : 99
        Max                               : 47 %
        Min                               : 7 %
        Avg                               : 17 %
    ENC Utilization Samples
        Duration                          : 16.58 sec
        Number of Samples                 : 99
        Max                               : 0 %
        Min                               : 0 %
        Avg                               : 0 %
    DEC Utilization Samples
        Duration                          : 16.58 sec
        Number of Samples                 : 99
        Max                               : 0 %
        Min                               : 0 %
        Avg                               : 0 %
```
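For anyone who wants to repeat this timing check on their own container, the same calculation can be scripted (a sketch; the container name `npu_pitt` is taken from the log above):

```python
import subprocess
from dateutil import parser

def container_runtime(name: str) -> float:
    """Return a stopped container's run time in seconds via `docker inspect`."""
    fmt = "{{.State.StartedAt}} {{.State.FinishedAt}}"
    out = subprocess.check_output(
        ["docker", "container", "inspect", "--format", fmt, name], text=True
    )
    started, finished = out.split()
    return parser.isoparse(finished).timestamp() - parser.isoparse(started).timestamp()

print(f"runtime: {container_runtime('npu_pitt'):.1f} s")
```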
Thank you for the feedback, @NPU_PITT , @YLLAB . We are looking into this issue now.
I am also confused about this.
