Hi everyone,
It looks like the fast-lane queues stopped validating submissions as they came in sometime yesterday. I've got them back up, and they are now validating all the submissions made since they went offline.
Sorry for any inconvenience.
If you notice your submissions are not being validated or scored in a timely manner, please let me know.
Thanks!
Good catch,
That looks like an old error that crept in when we updated the fast lane input file.
This has been fixed.
Thanks!
Hi Andrew,
One more small fast lane issue: I'm getting an error that I think arises from a typo in the "input.csv" file. In the version of the file I downloaded from GitHub, the final column, "ensg.expr.file", has "ds1_ensg.csv" entered in both the first and second rows (file contents copied below).
dataset.name,cancer.type,platform,scale,native.probe.type,native.expr.file,hugo.expr.file,ensg.expr.file
ds1,BRCA,Illumina HiSeq 2000,Linear,Hugo,ds1.csv,ds1.csv,ds1_ensg.csv
ds2,CRC,Illumina HiSeq 2000,Linear,Hugo,ds2.csv,ds2.csv,ds1_ensg.csv
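As an aside, here is a minimal sketch of a check that would catch this kind of mismatch. It assumes the intended convention is that each row's ensg.expr.file is named after its own dataset.name (e.g. ds2_ensg.csv for ds2), which is only a guess from the naming pattern:

    import csv

    # Flag rows whose ensg.expr.file does not start with that row's own dataset.name.
    # The dataset-name-prefix convention is an assumption, not something documented.
    with open("input.csv", newline="") as f:
        for row in csv.DictReader(f):
            expected_prefix = row["dataset.name"] + "_"
            if not row["ensg.expr.file"].startswith(expected_prefix):
                print(f"{row['dataset.name']}: unexpected ensg.expr.file {row['ensg.expr.file']!r}")

Run against the contents above, this would flag only the ds2 row.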
Assuming this is also the file used in the fast lane, is it possible to update it so ensg-encoded entries can be tested?
Thanks,
Patrick
Hi Patrick,
Yes, this is on our end. Our EC2 instance was nearly full, which wasn't allowing Docker images of any significant size to be downloaded and run. I've cleared some space and have had all the affected files re-validated.
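For anyone keeping an eye on their own setup, here is a minimal sketch of the kind of free-space check that makes this easy to spot early. The /var/lib/docker path is an assumption; it is simply the default location where Docker keeps its image data on most Linux hosts:

    import shutil

    # Report free space on the volume that holds Docker's image store.
    # /var/lib/docker is Docker's default data directory on Linux; adjust if yours differs.
    usage = shutil.disk_usage("/var/lib/docker")
    print(f"free: {usage.free / 2**30:.1f} GiB of {usage.total / 2**30:.1f} GiB total")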
Hi Andrew,
A question regarding an error I suspect others may be encountering:
My fast lane submissions are ending with an error that didn't come up a week ago. Examples from two runs below:
"STDERR: 2019-08-21T16:47:39.897880586Z toil.batchSystems.abstractBatchSystem.InsufficientSystemResources: Requesting more disk than either physically available, or enforced by --maxDisk. Requested: 2147483648, Available: 651390976"
"STDERR: 2019-08-21T19:29:26.631909222Z toil.batchSystems.abstractBatchSystem.InsufficientSystemResources: Requesting more disk than either physically available, or enforced by --maxDisk. Requested: 2147483648, Available: 629358592"
Is there something I should do on my end to fix this? Or perhaps we're just overloading the system right now? Please advise.
Thanks,
Patrick