I'd like to understand the final leaderboard - I've got no problems or objections, I'd just like to understand it clearly. The leaderboard is in order from first to last. What is the metric, exactly, that decided the final order? That is, what formula collates the columns to give the order? Or is it just one column that is the decider? Finally, can we see all the write-ups now? If so, where do we find them?

Created by Peter Brooks (fustbariclation)
The winners declared on the Final Results page for all three questions do not match the scoring tables (if we sort on average_avg_iAUC). Interestingly, the winners on the Final Results page do match if we sort on weighted_avg_bal; I do not know whether that is by chance or something else.
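A quick way to check this for yourself is to re-sort an exported copy of the leaderboard on each column and compare the top rows. This is only a minimal sketch: the file name `leaderboard.csv` and the `team` column are assumptions, while `average_avg_iAUC` and `weighted_avg_bal` are the column names mentioned above.

```python
# Hypothetical check of which column the Final Results order matches.
# Assumes the leaderboard has been exported to leaderboard.csv with a
# "team" column plus the two score columns discussed above.
import pandas as pd

lb = pd.read_csv("leaderboard.csv")

for col in ("average_avg_iAUC", "weighted_avg_bal"):
    top = lb.sort_values(col, ascending=False)["team"].head(3).tolist()
    print(f"Top 3 when sorted on {col}: {top}")
```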
I think honoring runner-ups within a statistical tie on the primary metric is perhaps the best way forward for everyone involved:

1. DREAM is not a horse race, but a practice of good science and statistics.
2. It protects and strengthens the original winners' position by demonstrating that it was a tight competition.
3. Students often have to commit months to a challenge. Recognition of any kind can be a great encouragement to them, and that encouragement will shape their careers. I still remember the first time I got a statistical tie in one of the three sub-challenges of a really small-scale challenge. To me that memory is always as sweet as a first kiss.
4. It helps to maintain the participation pool and to grow the community by giving out positive feedback. The relationship between DREAM and the participants is like fish and water. Without water the fish would suffocate. You need water to breathe; you need a sea to swim freely.

They said they don't want money; that one factor makes things easy.
On behalf of ARMM, thank you, @yuanfang.guan, for your insights and suggestions. We have been thinking about this as well, and would like to offer the following thoughts:

* We want to congratulate the winning teams in each sub-challenge! Well done, and we look forward to discussing the different approaches.
* We hope that the runner-up teams for each of the three challenges could be honored on each sub-challenge result page.
* We appreciate @yuanfang.guan's effort in creating a combined listing of teams across the three challenges; maybe the organizers can create a similar list on the result page, of course without prizes.
* For our part, we do not have any interest in the monetary rewards - the winners have earned those. However, we would like to continue to participate in the scientific discussion related to the challenge, if the organizers see fit to invite us.

Again, congratulations to the winning teams, and to the organizers and everyone who participated for making this an exciting and productive competition.
* Firstly, please note that the leaderboards **should be** ranked by weighted iAUC, **but** the sorting is sometimes not quite right: the table will look sorted but have a handful of rows out of order at the top. I am looking into this, but I recommend clicking on the column header to force it to sort correctly. There is no secondary-column sorting.
* The bootstrapped weighted iAUCs will most likely be part of the final manuscript, but I'll see if we can post them to the wiki so teams with high scores can get an idea of why they were close but not the final winner.
* I believe method write-ups will be released soon, but I have to check on timing.
* As for teams being invited to join the community phase: it is usually just the top performers. Some challenges extend invitations to other top performers, and the MM Challenge organizers/leads will be discussing which avenue to take in the upcoming weeks. _If_ other teams are invited, performance across several sub-challenge questions is often the most important consideration, as Yuanfang Guan suggests.
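For readers wondering how a "statistical tie" on the bootstrapped weighted iAUCs might be assessed, here is a minimal sketch. It assumes a table of bootstrap samples (one row per bootstrap iteration, one column per team) and uses an empirical Bayes-factor-style cutoff, which is a common DREAM convention but is an assumption here, not the organizers' actual scoring procedure.

```python
# Sketch: flag teams statistically tied with the top performer, given
# bootstrapped weighted iAUC samples. The input format and the Bayes-factor
# cutoff of 3 are assumptions, not the official scoring code.
import pandas as pd

def tied_with_top(bootstrap_scores: pd.DataFrame, cutoff: float = 3.0) -> list[str]:
    """bootstrap_scores: one row per bootstrap iteration, one column per team,
    each cell holding that team's weighted iAUC on that bootstrap sample."""
    means = bootstrap_scores.mean().sort_values(ascending=False)
    top = means.index[0]
    tied = [top]
    for team in means.index[1:]:
        wins = (bootstrap_scores[top] > bootstrap_scores[team]).sum()
        losses = (bootstrap_scores[top] < bootstrap_scores[team]).sum()
        # Empirical Bayes factor: how often the top team beats this team,
        # relative to how often it loses to it.
        if wins / max(losses, 1) < cutoff:
            tied.append(team)
    return tied
```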
The final top performer is determined by: https://www.synapse.org/#!Synapse:syn6187098/wiki/449444

I don't have any problems or objections either, because in all challenges there is a factor of luck and of how well you fit the scoring metric, and the winner should be decided by what we signed up to. But, as we have all seen how much fluctuation there is in the scoring (e.g. the final winner of Sub3, me, had an iAUC of 0.4628 when it was evaluated on 15% of the data), I do have two suggestions for the next steps.

1. Please honor runner-ups on each sub-challenge result page by the statistical-tie group on the primary metric. @Michael.Mason A challenge should not be winner-takes-all, but a venue to encourage junior scientists towards rigorous evaluation practice. Even a runner-up position would be of tremendous help to their CVs.
2. With this much fluctuation in the metrics, I can imagine the current winners could rank below position 10, or even below random, on another dataset. Thus, I think it is absolutely important to pool the relative rankings of the three sub-challenges and maybe invite the top teams into the community phase, because a pooled ranking indicates more stable performance across tasks, and likely more stable algorithms on future datasets.

I took the liberty of pooling the rankings by the primary metric (when a team missed a sub-challenge, it was assigned the worst rank, because a team that committed to all three obviously contributed more to the challenge). **I removed my own ranking** because I don't want people to think I did this for myself; I don't want to compete with others and I don't need to. I make these suggestions only because I know that when I say them, it carries some weight.

| Team | Mean rank across the three sub-challenges |
| --- | --- |
| ARMM | 5.33 |
| SUGO | 5.67 |
| DMIS_MM | 6.33 |
| dreamAnon | 7.33 |
| N121 | 7.33 |
| RandomRainforest | 7.67 |
| Roland_Luethy | 10.00 |
| mitten_group | 10.67 |
| Tianle_Ma | 10.67 |
| Aditya_Pratapa | 12.67 |
| Thomas_Yu | 14.00 |
| Aboensis | 17.33 |
| Deargen | 17.33 |
| UGentIDLAB | 18.00 |
| PersianGulf | 18.67 |
| Raghava_India | 18.67 |
| Brian_White | 19.00 |
| GreenParrots | 19.00 |
| The_ruMMagers | 20.00 |
| RTJS | 20.33 |
| helmi2 | 20.67 |
| BiSBII_UM_Myeloma | 21.33 |
| UChicago | 21.33 |
| GACT-MM | 22.00 |
| SecessionComputing | 22.00 |
| MM_UGENT_KUL | 22.33 |
| LMSM | 22.67 |
| PrecisionHunter | 22.67 |
| ZeroPage | 22.67 |
| Breizhdreholl | 23.00 |
| Peter_Brooks | 23.00 |
| LifeIsGood | 23.33 |
| Biorg-MM | 23.67 |
| i_kozlov | 24.33 |
| Michiel | 24.33 |
| Blueberry_X | 24.67 |
| AVISEK_DEYATI | 25.00 |
| ABU_ML_LAB | 25.67 |
| BYU | 26.33 |
| HT_Team | 26.33 |

I think there is an obvious gap between ARMM (and SUGO) and the rest of the teams. Yet right now the most stable submissions **don't get ANY credit in any form**. Of course, there will be an issue with sharing incentives (e.g. community-phase cash incentives) when inviting more teams. Maybe the teams involved here, @ARMM @SUGO, can comment on the situation.
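For anyone who wants to reproduce or extend the pooled ranking above, here is a minimal sketch of the procedure as described: average each team's rank across the three sub-challenges, assigning the worst rank when a team skipped one. The example input ranks are hypothetical; the real ranks come from the sub-challenge leaderboards, and how "worst rank" is defined (here, the total number of teams) is an assumption.

```python
# Sketch of the rank pooling described above: average each team's rank
# across the sub-challenges, assigning the worst possible rank when a team
# skipped one. The example inputs below are made up, not real leaderboard ranks.
def pool_ranks(per_subchallenge_ranks: list[dict[str, int]]) -> dict[str, float]:
    teams = set().union(*(ranks.keys() for ranks in per_subchallenge_ranks))
    worst = len(teams)  # assumption: "worst rank" = total number of teams
    totals = {team: 0.0 for team in teams}
    for ranks in per_subchallenge_ranks:
        for team in teams:
            totals[team] += ranks.get(team, worst)
    n = len(per_subchallenge_ranks)
    # Return teams ordered from best (lowest) to worst mean rank.
    return dict(sorted(((t, s / n) for t, s in totals.items()), key=lambda kv: kv[1]))

# Example (hypothetical ranks for two sub-challenges):
# pool_ranks([{"ARMM": 1, "SUGO": 2, "DMIS_MM": 3}, {"SUGO": 1, "ARMM": 2}])
# ARMM and SUGO both average 1.5; DMIS_MM averages 3.0, since it receives
# the worst rank (3) in the sub-challenge it missed.
```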
