We are happy to announce the tentative winners (pending method write-up, code submission, and review) of the final round of the ENCODE Imputation Challenge!
Winner: Team Hongyang_Li_and_Yuanfang_Guan (v1)
Runner-up: We have a tie between Team Lavawizard and Team Guacamole
Second Runner-up: Team imp
All the details are posted on the Synapse page below, including browser tracks of all the submissions and the ground-truth tracks:
https://www.synapse.org/#!Synapse:syn17083203/wiki/597122
Congratulations to the winners! And a huge thanks to all the participants.
Please let us know if you have any questions.
**What's next**
**CODE:**
- We are setting up cloud buckets so that every team can EXACTLY replicate its environment and code base. We will provide all the challenge data in this bucket. We expect to be able to run your method within this cloud bucket and minimally reproduce your results; this is mandatory for the top 3 teams. We are happy to assist with any technical issues throughout this process.
- We will require you to submit the exact code that was used for the challenge via a tagged version on GitHub or another bona fide code repository.
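For teams unfamiliar with tagged releases, the steps below are a minimal sketch of how a submission tag might be created with git; the tag name `encode-final-v1` is only an illustrative placeholder, not a required naming scheme.

```shell
# From the repository state that produced your final-round submission,
# create an annotated tag marking that exact commit (tag name is hypothetical):
git tag -a encode-final-v1 -m "Exact code used for the ENCODE Imputation Challenge final round"

# Publish the tag to your remote (e.g. GitHub) so reviewers can check out
# precisely that version with: git checkout encode-final-v1
git push origin encode-final-v1
```

An annotated tag (`-a`) is preferable to a lightweight tag here because it records the tagger, date, and a message alongside the commit it points to.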
**WRITE-UP**
We will require a detailed write-up of your method (as if it were the methods section of a paper).
We hope you can complete the write-up and code submission within 3 weeks after we provide instructions for setting up and accessing the cloud buckets (we expect to announce this later this week).
**WINNERS & PRIZE MONEY**
Final winners will be officially announced after validation of the code and write-ups. Prize money will be distributed thereafter.
**VIDEOCONFERENCE**
Once we have validated the results and announced the final winners, we will hold a videoconference where the top 3 teams will present their methods and results.
**COMPANION PAPERS**
We encourage all participants to start working on their companion papers. There are no restrictions on when you can submit or publish these. The only restriction is that your individual papers focus on your individual methods and avoid global comparisons across the challenge. We only ask that you send the organizers a draft before you submit.
**PLANS AND PARTICIPATION IN THE FLAGSHIP CHALLENGE PAPER**
We of course plan to perform a thorough comparative analysis of all the methods and report the challenge results.
But we also plan to explore several other aspects:
- Systematic analysis of errors and their biological implications
- Impact of cross-sample normalization methods and other units for the tracks
- Other scoring measures
- Other ranking measures
From an early look at the prediction performance and rankings, it appears there are many methods with similar performance just below the top 3. Although we have not yet had a chance to examine the actual methods used by all the participants, it is likely that they contain several complementary good ideas.
Hence, we tentatively aim to develop a new unified, easy-to-use imputation tool integrating the best ideas from all the methods (especially the top 3). We hope you will all actively participate in this process. We expect to distribute credit very generously in any collaborative output. If such a new, improved imputation tool does arise, we plan to use it to impute across the entire ENCODE compendium. This will be one of the goals of the unified challenge paper.
We will send more information about these plans as we move forward.
Thanks,
Imputation Challenge Organizers