In the first test leaderboard round, each team could make a single submission. The goal was to better estimate the computational resources needed and to double-check that everything was working correctly. As mentioned in a previous post, we did find a slight problem with the background set of genes used to compute the enrichment (see below for details). This has now been fixed. In our tests we see that it can have a strong effect on the scores, depending on the submission.
**The scores from the test leaderboard round are invalid and should be ignored (they are marked with submission status CLOSED).**
--We will rescore the test submissions and update the scores as soon as they are ready (probably within a day or two).--
EDIT: The test submissions will NOT be rescored, sorry for the confusion. If you would like to have your test submission scored, please resubmit it in one of the leaderboard rounds.
**What was the problem with the background set?**
During the test round we realized that the scoring script used the union of all genes in the submitted modules as the background for computing enrichment scores. This worked fine with our baseline methods, as they included every gene of the network. However, if a submission did not include all genes of the network (e.g. because some modules were excluded due to their size), its background set was incomplete and differed from that of other submissions, making the resulting scores not comparable. We now always use all genes of the given network as the background.
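To see why the background matters, here is a minimal sketch of a one-sided hypergeometric enrichment test (a standard choice for this kind of scoring; we are not claiming it is the exact test used by the challenge scripts). The gene names, annotation set, and module are made up for illustration. Shrinking the background to only the genes present in a submission changes the p-value even though the module itself is unchanged.

```python
from math import comb

def enrichment_pvalue(module, annotated, background):
    """One-sided hypergeometric test: probability of observing at least
    the given overlap between a module and an annotation set, relative
    to the chosen background set of genes."""
    background = set(background)
    module = set(module) & background
    annotated = set(annotated) & background
    N = len(background)           # background size
    K = len(annotated)            # annotated genes in background
    n = len(module)               # module size
    k = len(module & annotated)   # observed overlap
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(n, K) + 1)
    ) / comb(N, n)

# Hypothetical toy network of 20 genes.
network = [f"g{i}" for i in range(20)]
annotated = network[:5]       # 5 genes carry some annotation
module = ["g0", "g1", "g2"]   # a submitted module, fully annotated

# Correct background: all genes of the network.
p_full = enrichment_pvalue(module, annotated, network)

# Incomplete background: only genes that appear in the submission's
# modules (e.g. because small modules were dropped).
partial = network[:8]
p_partial = enrichment_pvalue(module, annotated, partial)

print(p_full, p_partial)
```

With the full network as background the overlap is much more surprising (smaller p-value) than with the truncated background, so two submissions covering different subsets of genes would have been scored on different scales.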