Hi All, I made a submission to the challenge, but I received an email about a failed workflow. This is the error message:

```
packages/toil/leader.py", line 246, in run
STDERR: 2020-04-19T17:10:17.500881801Z raise FailedJobsException(self.config.jobStore, self.toilState.totalFailedJobs, self.jobStore)
STDERR: 2020-04-19T17:10:17.500885599Z toil.leader.FailedJobsException
```

The log files linked in the error message are blank. Can anyone help me fix this? Also, how many submissions can one make in Round 3, and would this one be counted as a submission? Thanks & Regards, Abhinav Jain

Created by abhinavjain014
@ehartley , Sorry about that; I will email you your team's scores for submissions 9704148 and 9704150. Round 3 results are now posted as well, so you can also check your best-scoring model there. Best, Verena
@v.chung, I was the one who submitted the models and I can see the entries for the submissions on my Submission Dashboard, but not the scores. Thanks, Emily
@ehartley , Ah, my apologies, I misspoke -- 9704152 was the one that was submitted after the deadline, not 9704147. For the scores, you may access them from the [Submission Dashboard](https://www.synapse.org/#!Synapse:syn18065891/wiki/601192) if you are the one who submitted the models. Let me know if you are unable to see the scores! Verena
Hi @v.chung , Thanks for looking into the workflow error. Submission 9704152 (the late one) was actually just a re-submission of 9704150 that I made after I got the workflow error because I wanted to make sure that I'd done all the submission steps correctly. Would it be possible for you to send me the scores for submissions 9704148 and 9704150? Thanks again, Emily
Hi @ehartley , Thank you for your patience and help in debugging! I wanted to update you and your team that I was able to successfully run your models for submissions 9704148 and 9704150 and have updated those scores accordingly. 9704147 was unfortunately still unsuccessful (due to a runtime error) and 9704152 was submitted after the deadline, so I did not include those two. Let us know if you have any follow-up questions!
Hi @v.chung, I found the processing log, and it contains the following error statements:

```
Exception: No 'APOLLO-2-Submission.json' file written to /output, please check inference docker
("Error collecting output for parameter 'predictions':\nmetadata-automation-challenge-master/workflow/run_docker.cwl:65:7: Did not find output file with glob pattern: '['output/APOLLO-2/APOLLO-2-Submission.json']'", {})
```

The APOLLO-2 input file is named "APOLLO-2-leaderboard.tsv", so shouldn't the output file be named "APOLLO-2-leaderboard-Submission.json"? Is this naming issue what caused the error? My submission IDs are: 9704152, 9704150, 9704148, 9704147. Thanks, Emily
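(Editor's note: judging from the glob pattern in that error, the workflow appears to derive the expected output filename from the dataset name alone, not from the full input basename, so `APOLLO-2-leaderboard.tsv` would still yield `APOLLO-2-Submission.json`. A minimal sketch of that naming rule; the helper names are mine, and the `-leaderboard` suffix convention is an assumption based on the filenames mentioned in this thread:)

```python
from pathlib import Path

def dataset_from_input(input_tsv: str) -> str:
    """Derive the dataset name from an input filename,
    e.g. 'APOLLO-2-leaderboard.tsv' -> 'APOLLO-2'
    (assumes the '-leaderboard' suffix convention)."""
    stem = Path(input_tsv).stem
    suffix = "-leaderboard"
    return stem[: -len(suffix)] if stem.endswith(suffix) else stem

def expected_output_path(dataset: str, output_dir: str = "output") -> Path:
    """Build the path the workflow's glob appears to expect:
    output/<dataset>/<dataset>-Submission.json."""
    return Path(output_dir) / dataset / f"{dataset}-Submission.json"
```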
Hi @ehartley , Sorry for the troubles! Can you let me know what the submission IDs are? If they have failed due to a technical error in the workflow, I will re-run those submissions accordingly. Best, Verena
Hi All, I received the same failed-workflow error message posted above, and no logs were produced. Can anyone help me fix this? I made a successful submission in Round 2 and followed the exact same process this time, so I can't figure out why it's not working. Thanks, Emily
Hi @abhinavjain014, Thank you for participating! Looking into your submission, it seems that your model is not producing the expected prediction files and/or they are not located in the correct output directory, e.g. ``` No 'REMBRANDT-Submission.json' file written to /output, please check inference docker ``` Similar to previous rounds, you are allowed up to 5 _successful_ submissions during Round 3; this invalid submission will not count towards your quota. Best, Verena
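(Editor's note: a minimal sketch of the final step an inference script could take to write its predictions with the name and location that the error message above says the workflow checks; the function name and JSON payload are placeholders, not part of the challenge's documented API:)

```python
import json
from pathlib import Path

def write_submission(dataset: str, predictions: dict, output_dir: str = "/output") -> Path:
    """Write predictions as '<dataset>-Submission.json' inside the
    mounted output directory, e.g. '/output/REMBRANDT-Submission.json',
    which is the filename the validation error above reports as missing."""
    out_path = Path(output_dir) / f"{dataset}-Submission.json"
    out_path.parent.mkdir(parents=True, exist_ok=True)  # ensure /output exists
    out_path.write_text(json.dumps(predictions, indent=2))
    return out_path
```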

No Logs Produced (Workflow Failed)