I have a model submitted to the main challenge track. In my local docker it runs to completion in around 30 minutes, but in the remote execution it times out during the synthetic evaluation. Has anything changed? This is a very basic Random Forest model that spends about half its time just ingesting the data. https://www.synapse.org/#!Synapse:syn21052011

Created by Ivan Brugere (@ivanbrugere)
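Since ingestion dominates the runtime here, one common speedup is to read only the needed columns with explicit dtypes. A minimal sketch, assuming pandas and an OMOP-style measurement table; the path, column names, and dtypes are illustrative assumptions, not the challenge's actual layout:

```python
# Sketch: faster CSV ingestion by limiting columns and fixing dtypes.
# The path and column names below are assumptions, not the challenge's layout.
import pandas as pd

usecols = ["person_id", "measurement_concept_id", "value_as_number"]
dtypes = {
    "person_id": "int64",
    "measurement_concept_id": "int32",
    "value_as_number": "float32",
}

# Reading only the needed columns with explicit dtypes skips pandas'
# type-inference pass and can cut load time and memory substantially.
measurements = pd.read_csv("/data/measurement.csv", usecols=usecols, dtype=dtypes)
```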
Can you share with me the docker repository that uses the measurement table? I can run it on our fast lane with no time quota and get a runtime. You should be able to use the sharing settings on the Synapse repository page.
I am timing out at > 1 hour on the "EHR Dream Challenge Submission" track, on synthetic_small in the first stage of the submission. This is 53 minutes without the measurements table, with model selection on train/validation resampling (N=3 splits). Runtimes in my local docker aren't comparable (~3x slower locally, not sure why), so it's hard to estimate the total runtime with measurements.
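For concreteness, a minimal sketch of the kind of model selection described above, assuming scikit-learn; the data, parameter grid, and metric are placeholders, not the submitted model:

```python
# Sketch: RF model selection over N=3 train/validation resamples.
# The data, parameter grid, and scoring metric are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import ShuffleSplit, cross_val_score

rng = np.random.default_rng(0)
X = rng.random((1000, 20))    # placeholder feature matrix
y = rng.integers(0, 2, 1000)  # placeholder binary labels

# Three random train/validation resamples, as described above.
splits = ShuffleSplit(n_splits=3, test_size=0.2, random_state=0)
for n_estimators in (100, 300):
    scores = cross_val_score(
        RandomForestClassifier(n_estimators=n_estimators, n_jobs=-1),
        X, y, cv=splits, scoring="roc_auc",
    )
    print(f"n_estimators={n_estimators}: mean AUC {scores.mean():.3f}")
```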
So synthetic_large is supposed to estimate runtime on the UW dataset, which had a time quota of 6 hours and which we just changed to 10 hours (announcement incoming). The synthetic_small (aka synthetic_fast_lane) has a time limit of 1 hour in the Fast lane and in the first stage of the Main submission queue.

> In train I'm only doing a model selection of traditional models (e.g. RF), and am unable to touch the measurements file. If I do a more robust resampling, or use measurement features, I time out.

So are you timing out when you use measurement features from the synthetic_small/synthetic_fast_lane files, or when you use synthetic_large? And are you timing out in under an hour?
Hi @trberg, I thought the timing was on synthetic_large, but it was actually synthetic_small. I have the timings from my remote docker run (in seconds):

- Indexing-Train: 71
- Total-Train: 3117
- Indexing-Test: 45
- Total-Test: 76

In train I'm only doing model selection over traditional models (e.g. RF), and am unable to touch the measurements file. If I do a more robust resampling, or use measurement features, I time out. Is synthetic_large intended to estimate the runtime on the UW dataset? If so, I don't understand why it is allotted only an hour when the allotment for UW is larger.
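A hypothetical sketch of instrumentation that would produce stage timings like these; the stage names mirror the labels above, and the stage bodies are placeholders:

```python
# Hypothetical timing harness; the stage bodies are placeholders.
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def timed(stage):
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[stage] = time.perf_counter() - start

with timed("Indexing-Train"):
    pass  # e.g. build the patient index from the training tables

with timed("Total-Train"):
    pass  # e.g. feature construction plus model selection

for stage, seconds in timings.items():
    print(f"{stage}: {seconds:.0f}")
```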
Hi @ivanbrugere, did you run this model in your local docker with the [new synthetic data](https://www.synapse.org/#!Synapse:syn20685954)? Use the fast lane data to estimate the time; the new synthetic data is a little more complicated. Let us know how long this model takes in your local environment if it's more than an hour. Thanks, Tim
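To get a comparable local number, one option is to time the full container run end to end. A sketch using Python's subprocess; the image name and mount path are placeholders, not the challenge's actual run conventions:

```python
# Sketch: wall-clock timing of a local docker run end to end.
# "my-ehr-model:latest" and the mount path are placeholders.
import subprocess
import time

start = time.perf_counter()
subprocess.run(
    [
        "docker", "run", "--rm",
        "-v", "/path/to/synthetic_data:/data:ro",  # placeholder mount
        "my-ehr-model:latest",                     # placeholder image
    ],
    check=True,
)
elapsed_min = (time.perf_counter() - start) / 60
print(f"wall-clock runtime: {elapsed_min:.1f} min")
```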
