Repost of @ialsag01's question in a new thread: I regret to inform you that my submission 9730551 has failed with the following reason: 17 file(s) not found : 'ds1c_p10k_n2_imputed.csv', 'ds1c_p10k_n3_imputed.csv', 'ds1c_p20k_n1_imputed... For this part, I am following the example in https://github.com/Sage-Bionetworks-Challenges/multi-seq-challenge-example-models/tree/main/task1/py-deepimpute and my Dockerfile is:

```
FROM ubuntu:20.04
RUN apt-get update -y
RUN apt-get install software-properties-common -y
RUN apt-get install python3 -y
RUN apt-get install python3-pip -y
RUN pip3 install torch
RUN pip3 install scanpy
RUN pip3 install numpy==1.22
COPY src/* ./
ENTRYPOINT ["python3", "/run_model.py", "-i", "/input", "-o", "/output"]
```

I can't pinpoint the reason; could you please support me on this matter? Many thanks
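(Editor's note: the error means the scorer looked for per-dataset `*_imputed.csv` files in `/output` and did not find all of them. Below is a minimal sketch of the I/O contract `run_model.py` is expected to follow; the `impute()` step is a placeholder for the real model, and pandas is assumed to be available as a scanpy dependency.)

```
#!/usr/bin/env python3
"""Minimal sketch of the I/O contract for run_model.py (impute() is a placeholder)."""
import argparse
from pathlib import Path

import pandas as pd


def impute(df: pd.DataFrame) -> pd.DataFrame:
    # Placeholder: substitute the actual model's imputation here.
    return df.fillna(0)


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("-i", "--input", required=True)
    parser.add_argument("-o", "--output", required=True)
    args = parser.parse_args()

    out_dir = Path(args.output)
    out_dir.mkdir(parents=True, exist_ok=True)

    for csv_path in sorted(Path(args.input).glob("*.csv")):
        df = pd.read_csv(csv_path, index_col=0)
        # Write each result as soon as it is computed, so partial progress
        # survives if the container hits the memory or runtime limit.
        impute(df).to_csv(out_dir / f"{csv_path.stem}_imputed.csv")


if __name__ == "__main__":
    main()
```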

Created by Rongrong Chai (@rchai)
Hi @ialsag01 and @chentsaimin, FYI, we have improved the error logging: errors for exceeding the memory limit are now captured, along with errors for hitting the runtime limit. We also return the tree structure of the output folder in your log folder. I hope this helps you better understand the errors and eases your debugging. Thank you.
Hi @ialsag01, Yes, the runtime limit has been extended to 12 hours. Submission '9730742' was likely stopped due to excessive memory usage. If the submission had reached the runtime limit, you would see an error message in the submission's logs such as "Submission time limit of 12h reached". Thank you.
Hi @rchai , My last submission (9730742) failed after exactly 6 hours; I believe this limit has been extended to 12 hours. Could you please check? Many thanks
Hi @rchai , Many thanks, really appreciated.
Thank you for the feedback, @chentsaimin and @ialsag01! The current runtime limit is 6h for the leaderboard phase and 12h for the final phase, but we are aware that this may be restrictive for deep learning models. Therefore, we will double the runtime limit to 12h for the leaderboard phase for now. Based on the submitted models' runtimes, we will then determine the runtime limit for the final phase. > Note: The runtime limit updates will not take effect until 12pm PST.
Hi @rchai , I regret to inform you that my submission 9730598 failed again. The error message was similar to the previous one (reasons: 19 file(s) not found : 'ds1c_p10k_n1_imputed.csv', 'ds1c_p10k_n2_imputed.csv', 'ds1c_p10k_n3_imputed... ), which suggests that I have hit either the memory limit or the time limit. There is no point in lowering the number of epochs below 500; I am already submitting an underfitted model. Reducing the model's depth and width would solve the memory issue but would degrade performance. One thing to mention: I believe 6 hours is not enough to train a deep learning model from scratch on 19 scRNA-seq datasets, especially under the given memory constraint, which limits parallelism. Thanks
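(Editor's note: one way to avoid losing a whole submission to a hard timeout is a wall-clock budget that ends training cleanly before the limit. A sketch under assumed conditions follows; the 6h limit, the headroom, and `train_one_epoch` are hypothetical and should be adapted to the real model code.)

```
import time

# Leave ~30 minutes of headroom under an assumed 6h container limit,
# so the remaining datasets can still be imputed and written out.
BUDGET_S = 5.5 * 3600


def train_with_budget(model, train_one_epoch, max_epochs=500):
    """train_one_epoch is a hypothetical callable that runs one full epoch."""
    start = time.monotonic()
    for epoch in range(max_epochs):
        train_one_epoch(model)
        elapsed = time.monotonic() - start
        per_epoch = elapsed / (epoch + 1)
        # Stop if one more epoch would likely overrun the budget.
        if elapsed + per_epoch > BUDGET_S:
            break
    return model
```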
Dear challenge manager @rchai , Since this challenge asks for training from scratch in the Docker cloud, I would like to confirm the runtime limit for the final round. FYI, a deep learning model takes time to train; thus, I sincerely hope this challenge can extend the runtime limit to at least 24 hours. Sincerely yours, Tsai-Min
Hi @rchai , Thanks a lot. I will alter my model to reduce memory usage. Because it is a deep learning model, the number of iterations is expected to be high (1000); however, I will set it to 500 and see how it goes.
Hi @ialsag01, I think the issue was due to excessive memory usage. Based on the error message and some testing, your submission (9730551) was imputing the data but could not process all of it before memory reached the maximum limit (160g). When I reduced the iterations to 10 (one of the model's parameters), it was able to produce all the outputs. Meanwhile, please also take runtime into consideration: I noticed the submission ran for ~5h, which was close to the runtime limit (6h for the leaderboard phase). I hope this helps. Thank you.
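(Editor's note: a common way to keep peak memory under the container limit is to process one dataset at a time, write each result immediately, and release memory before moving on. A sketch follows; `build_model` and `train_and_impute` are placeholder stand-ins for the submitter's own model code, and the `/input` and `/output` paths match the Dockerfile's entrypoint above.)

```
import gc
from pathlib import Path

import pandas as pd
import torch


def build_model(n_genes: int) -> torch.nn.Module:
    # Placeholder architecture; substitute the real imputation model.
    return torch.nn.Linear(n_genes, n_genes)


def train_and_impute(model: torch.nn.Module, df: pd.DataFrame) -> pd.DataFrame:
    # Placeholder: run the real training loop here, then impute.
    with torch.no_grad():
        out = model(torch.tensor(df.fillna(0).to_numpy(dtype="float32")))
    return pd.DataFrame(out.numpy(), index=df.index, columns=df.columns)


input_dir, output_dir = Path("/input"), Path("/output")
output_dir.mkdir(parents=True, exist_ok=True)

# Handle one dataset at a time, write the result immediately, and release
# memory before moving on, so peak usage stays under the container limit.
for csv_path in sorted(input_dir.glob("*.csv")):
    df = pd.read_csv(csv_path, index_col=0)
    model = build_model(n_genes=df.shape[1])
    imputed = train_and_impute(model, df)
    imputed.to_csv(output_dir / f"{csv_path.stem}_imputed.csv")
    del model, df, imputed  # drop references so Python can reclaim memory
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # release cached GPU memory between datasets
```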
Hi @ialsag01, I am reposting your question in a new thread, since it's about invalid submissions. I can look into your submission today.
