Hello,
The first round has passed (Sep/21). When should we expect the first leaderboard to be available? And when will the new data for the second round be online?
Many thanks for organizing this!
BioTuring team.
Thanks so much, Kyle and others, for the great effort. I know it's a very time-consuming step. Do you think it would be worth adding memory and run-time to the benchmark as well, besides the quality value?

Time allowing, we'll try to compute metrics on runtime/memory usage of the different methods as part of the final paper/write-up, but that won't be reflected in the scoring of methods. However, there will probably be a quota for VM allocation, so if a method takes too much time or too many resources, it will get cut off before completing. We haven't determined a final number yet, but we will probably have a 'spending cap' on requested resources (i.e. the machine ResourceRequirement in the CWL doc); for the SMC-Het challenge it worked out to $7.00 per entry per tumor (about 35 hours on an n1-standard-4 machine). We'll be checking performance profiles of entries as we determine the budget of the evaluation phases, which will feed into the quotas.
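For reference, a minimal sketch of what the ResourceRequirement block of a CWL tool description might look like; the core/RAM/disk figures below are only illustrative (roughly an n1-standard-4), not the final challenge limits:

```yaml
# Illustrative only -- the actual per-entry caps for the challenge
# have not been finalized.
requirements:
  ResourceRequirement:
    coresMin: 4        # request 4 vCPUs (roughly an n1-standard-4)
    ramMin: 15000      # ~15 GB of RAM, specified in MiB
    tmpdirMin: 20480   # scratch space, in MiB
    outdirMin: 10240   # output directory space, in MiB
```

Presumably it's the resources requested in this block that would count against any per-entry spending cap.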
Currently we're collecting all the entries and making sure we can reproduce the pipelines on our end. We'll probably contact groups if we have issues running their code. Then we'll run the set of submitted methods on a held-out test set and post the leaderboard.