I'm having an issue that I really can't figure out. I submitted several models to the Challenge Submission channel (9696811, 9696813, 9696814). Each runs until the infer step on the Fast Lane dataset, then it is killed. The infer logs contain nothing useful:

/app/infer.sh: line 2: 6 Killed python infer.py

The model runs on both my laptop and my workstation with reasonable memory usage. On the remote side it may be an out-of-memory error, but I'm not doing anything unusual. Is there an issue with provisioning? In the pipeline logs I found:

STDERR: 2019-12-16T08:12:31.064028815Z INFO:toil.leader:Issued job 'file:///var/lib/docker/volumes/workflow_orchestrator_shared/_data/21174103-6f7b-401f-85d4-2916eab59226/EHR-challenge-develop/run_synthetic_infer_docker.cwl' python F/g/jobv6OIJT with job batch system ID: 9 and cores: 1, disk: 11.0 G, and memory: 100.0 M.

I first suspected the parallel backend, so I switched the step where the error occurred to serial. After that, the Fast Lane run (9696816) completed correctly and the prediction file was validated, but the workflow status is still "error". I then re-ran the same model on the Main Challenge (9696818) and it was killed as before.
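For anyone debugging a similar "Killed" with no traceback: the toil log above shows the job was issued with only 100.0 M of memory, so comparing that against the process's peak resident set size locally is a quick sanity check. A minimal sketch, assuming a Unix host; `run_inference` here is a hypothetical stand-in for the work done in infer.py:

```python
import resource

def run_inference():
    # Placeholder workload standing in for the real infer step.
    data = [0] * 1_000_000
    return sum(data)

def peak_rss_mb():
    # On Linux, ru_maxrss is the peak resident set size in kilobytes.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

result = run_inference()
print(f"peak RSS: {peak_rss_mb():.1f} MB")
```

If the reported peak exceeds the memory granted to the job, the kernel OOM killer terminating the process would produce exactly the bare "Killed" seen in the infer log.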

Created by Ivan Brugere ivanbrugere
Hello @ivanbrugere,

Please try submitting again. Your submission 9696816 is indeed valid; the "error" status was caused by an error in our pipeline. Submission 9696818, however, is INVALID: no prediction file was created.

Best,
Tom
