Hi, the model name and ID shown in the leaderboard for challenge 3 don't match what we submitted to challenge 3: the name should be "dawn1v4" and the ID 9652423, according to the email. Did you post the results of challenge 3 or challenge 1? Carlo (mitten_group team)

Created by Carlo Piermarocchi (piermaro)
OK... just please remember the dozens of bugs I found when you discuss this with Justin.
It should be noted that neither gene fusions nor copy number variations were supplied for challenge question 1. This may be something to consider for the community phase. In past challenges, teams that were best performers were invited to the community phase; in some challenges, teams that performed well in multiple questions but were not best performers have also been invited. I expect this is something the challenge organizers will be discussing in the next few weeks.
@jcabio I think the top score in sub1 was better than Thomas Yu's baseline, but it may not be statistically different, just random fluctuation. I believe that's what Justin said. But I observed this long ago, through my dozen or so projects involving GWAS data: I have never seen it add anything on top of clinical. See my prophetic write-up from 3 years ago, https://www.synapse.org/#!Synapse:syn2368045/wiki/64596 : 'Rely primarily on GWAS while clinical information is present. That's one thing that only works in papers; no, it never worked in any project I did in my life.' I know this conclusion is kind of anti-social, but I have just never seen it work in practice, so how am I supposed to believe in it? But I agree with you, Jason, this would be an interesting point to explore. @Michael.Mason, if my score by any chance, by any metric, is marginally statistically tied, it would be really nice to be invited to the community phase.
It seems to me that even if the top teams statistically tied with the baseline model, that is itself an interesting outcome. I would hope that the tied teams would be invited to participate in the community phase to discuss what could have been done differently and what might be a good direction for future exploration.
>A side note: the slides Yuanfang Guan saw should not have had any team names on them, only names for baseline models.

Right, I could not see from where I sat, but I heard the tie information pretty clearly, and that sub1 showed no obvious signal beyond the baseline clinical model. So guys, you can start to complain / propose suggestions on how to move forward.
Apologies for the delayed response. I'll have to double-check with some folks here. If it were allowable (but I don't think it is), and one of the submissions was the top performer in question 3, then only the people on that specific team would be eligible for incentives. The folks on the other (losing) team would **not** be eligible. So if one person disagreed on the method to use, split off, submitted as an individual, and had the best-performing method, then they would be eligible for authorship/financial award, but the members of their former team would not. Again, I think this is most likely not allowed. A side note: the slides Yuanfang Guan saw should not have had any team names on them, only names for baseline models. Kind regards, Mike
Also, @mushthofa, sorry, you are not allowed to split teams and submit two entries. The reason is obvious: there is effectively only one cohort in sub3, as the other one is so small, so one of the two models (the clinical-only model and the expression-only model) will be in the statistically tied winning group. Given how much noise we have seen, I don't think whoever's top model would transfer to other cohorts either. Everyone knows this point well and would have had a difficult time deciding which one to use. But everyone has to place their bet; that's it.
A teaser from the DREAM conference that just ended: in sub1 there will be \~8 teams statistically tied, with no apparent signal from genetics, and in sub2 \~5 teams statistically tied. I wish I had taken a picture of the slides so that, from the length of the team name, I could figure out where I am...
Dear Mike, In the rules you linked, it is mentioned that teams may disband, provided it is not done to circumvent the submission limit. Does this apply in the following scenario: a team made joint submissions on subchallenges 1 and 2 (using completely different methods for the two subchallenges), and because of disagreement over the final method to be used in subchallenge 3, they decide to disband and make separate submissions for subchallenge 3 (again with completely different methods)? Thanks!
A challenge participant cannot be part of multiple teams. Please see the [DREAM Challenges Official Rules, Section 4](https://www.synapse.org/#!Synapse:syn10144147/wiki/448310).
I would say theoretically it is a no, @Michael.Mason? But it happens, especially in challenges where the bias from cohorts overrides the information in the data, or where the data is so noisy that opportunism works better than methodology. In challenges where the dataset is huge and there is not much test-set bias, I almost never see that happen.
BTW Mike, if I may ask once again: what is the policy on membership in multiple teams in the challenge? Is it allowed, e.g., for a PI to be a contributing member of several participating teams, given that the teams have different members and the approaches used by these teams are (significantly) different? Thanks!
Thanks Mushtofa, That is now fixed.
Hi Mike, I think mitten_group also has 2 submissions scored.
Apologies for the confusion. This leaderboard does contain multiple submissions from Brian White, who is a challenge organizer. Those submissions are baseline models (UAMS70, EMC92+age, and age alone) and simply need to be relabeled as such. If you see multiple submissions from other teams/individuals, please let us know.
@deleapoli: It looks like the leaderboard is accidentally showing scores for _all_ round 2 submissions instead of just the _last_ round 2 submission for each group.
Dear organizers, We found a few cases where multiple scores for a single team appear in the Round 2 leaderboard. Is this normal? Regards, Bruce
There was indeed a mistake made in the release. It is now corrected. I apologize for the inconvenience.
This is indeed the final sub1 scoring, accidentally released, except that UAMS was left out of the table... I wonder why there is absolutely no correlation with performance from round 3. Further, based on how much the scores have varied across the rounds, it looks like no one is statistically better than Thomas Yu's baseline.
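For what it's worth, the kind of check I mean is just a rank correlation between rounds; a minimal sketch in Python, assuming a hypothetical leaderboard export with columns `team`, `round3_score`, and `final_score` (not the actual file released by the organizers):

```python
# Minimal sketch: compare Round 3 scores with final scores across teams.
# "sub1_scores.csv" and its column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import spearmanr

scores = pd.read_csv("sub1_scores.csv")  # hypothetical leaderboard export
rho, pval = spearmanr(scores["round3_score"], scores["final_score"])
print(f"Spearman rho between Round 3 and final scores: {rho:.2f} (p = {pval:.3f})")
```

A rho near zero would support the "no correlation from round 3" impression, though with so few teams the estimate itself is noisy.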
Similar question to Carlo's. Additionally, the weighted iAUC does not make sense, since it cannot be a weighted average of the two iAUCs shown, unless there is at least one more iAUC, not shown, that contributes to the reported weighted iAUC.
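To illustrate the arithmetic: a weighted average must fall between the values it combines, so if the reported weighted iAUC lies outside the range of the two iAUCs shown, something else must be contributing. A minimal sketch with hypothetical iAUC values and cohort weights:

```python
# Sanity check: a weighted iAUC must lie between the per-cohort iAUCs it combines.
# The iAUC values and cohort weights below are hypothetical, for illustration only.
iaucs = [0.62, 0.70]   # per-cohort iAUCs shown on the leaderboard (hypothetical)
weights = [120, 80]    # e.g. cohort sizes used as weights (hypothetical)

weighted_iauc = sum(w * x for w, x in zip(weights, iaucs)) / sum(weights)
print(weighted_iauc)   # 0.652, which lies inside [0.62, 0.70]
assert min(iaucs) <= weighted_iauc <= max(iaucs)
```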

Challenge 3 Leaderboard